Jan 21 10:19:45 np0005590810 kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 21 10:19:45 np0005590810 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 21 10:19:45 np0005590810 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 21 10:19:45 np0005590810 kernel: BIOS-provided physical RAM map:
Jan 21 10:19:45 np0005590810 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 21 10:19:45 np0005590810 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 21 10:19:45 np0005590810 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 21 10:19:45 np0005590810 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 21 10:19:45 np0005590810 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 21 10:19:45 np0005590810 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 21 10:19:45 np0005590810 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 21 10:19:45 np0005590810 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 21 10:19:45 np0005590810 kernel: NX (Execute Disable) protection: active
Jan 21 10:19:45 np0005590810 kernel: APIC: Static calls initialized
Jan 21 10:19:45 np0005590810 kernel: SMBIOS 2.8 present.
Jan 21 10:19:45 np0005590810 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 21 10:19:45 np0005590810 kernel: Hypervisor detected: KVM
Jan 21 10:19:45 np0005590810 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 21 10:19:45 np0005590810 kernel: kvm-clock: using sched offset of 3203079939 cycles
Jan 21 10:19:45 np0005590810 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 21 10:19:45 np0005590810 kernel: tsc: Detected 2800.000 MHz processor
Jan 21 10:19:45 np0005590810 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 21 10:19:45 np0005590810 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 21 10:19:45 np0005590810 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 21 10:19:45 np0005590810 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 21 10:19:45 np0005590810 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 21 10:19:45 np0005590810 kernel: Using GB pages for direct mapping
Jan 21 10:19:45 np0005590810 kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 21 10:19:45 np0005590810 kernel: ACPI: Early table checksum verification disabled
Jan 21 10:19:45 np0005590810 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 21 10:19:45 np0005590810 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 10:19:45 np0005590810 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 10:19:45 np0005590810 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 10:19:45 np0005590810 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 21 10:19:45 np0005590810 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 10:19:45 np0005590810 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 10:19:45 np0005590810 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 21 10:19:45 np0005590810 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 21 10:19:45 np0005590810 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 21 10:19:45 np0005590810 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 21 10:19:45 np0005590810 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 21 10:19:45 np0005590810 kernel: No NUMA configuration found
Jan 21 10:19:45 np0005590810 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 21 10:19:45 np0005590810 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 21 10:19:45 np0005590810 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 21 10:19:45 np0005590810 kernel: Zone ranges:
Jan 21 10:19:45 np0005590810 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 21 10:19:45 np0005590810 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 21 10:19:45 np0005590810 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 21 10:19:45 np0005590810 kernel:  Device   empty
Jan 21 10:19:45 np0005590810 kernel: Movable zone start for each node
Jan 21 10:19:45 np0005590810 kernel: Early memory node ranges
Jan 21 10:19:45 np0005590810 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 21 10:19:45 np0005590810 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 21 10:19:45 np0005590810 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 21 10:19:45 np0005590810 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 21 10:19:45 np0005590810 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 21 10:19:45 np0005590810 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 21 10:19:45 np0005590810 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 21 10:19:45 np0005590810 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 21 10:19:45 np0005590810 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 21 10:19:45 np0005590810 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 21 10:19:45 np0005590810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 21 10:19:45 np0005590810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 21 10:19:45 np0005590810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 21 10:19:45 np0005590810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 21 10:19:45 np0005590810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 21 10:19:45 np0005590810 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 21 10:19:45 np0005590810 kernel: TSC deadline timer available
Jan 21 10:19:45 np0005590810 kernel: CPU topo: Max. logical packages:   8
Jan 21 10:19:45 np0005590810 kernel: CPU topo: Max. logical dies:       8
Jan 21 10:19:45 np0005590810 kernel: CPU topo: Max. dies per package:   1
Jan 21 10:19:45 np0005590810 kernel: CPU topo: Max. threads per core:   1
Jan 21 10:19:45 np0005590810 kernel: CPU topo: Num. cores per package:     1
Jan 21 10:19:45 np0005590810 kernel: CPU topo: Num. threads per package:   1
Jan 21 10:19:45 np0005590810 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 21 10:19:45 np0005590810 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 21 10:19:45 np0005590810 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 21 10:19:45 np0005590810 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 21 10:19:45 np0005590810 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 21 10:19:45 np0005590810 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 21 10:19:45 np0005590810 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 21 10:19:45 np0005590810 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 21 10:19:45 np0005590810 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 21 10:19:45 np0005590810 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 21 10:19:45 np0005590810 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 21 10:19:45 np0005590810 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 21 10:19:45 np0005590810 kernel: Booting paravirtualized kernel on KVM
Jan 21 10:19:45 np0005590810 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 21 10:19:45 np0005590810 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 21 10:19:45 np0005590810 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 21 10:19:45 np0005590810 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 21 10:19:45 np0005590810 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 21 10:19:45 np0005590810 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 21 10:19:45 np0005590810 kernel: random: crng init done
Jan 21 10:19:45 np0005590810 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 21 10:19:45 np0005590810 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 21 10:19:45 np0005590810 kernel: Fallback order for Node 0: 0 
Jan 21 10:19:45 np0005590810 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 21 10:19:45 np0005590810 kernel: Policy zone: Normal
Jan 21 10:19:45 np0005590810 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 21 10:19:45 np0005590810 kernel: software IO TLB: area num 8.
Jan 21 10:19:45 np0005590810 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 21 10:19:45 np0005590810 kernel: ftrace: allocating 49417 entries in 194 pages
Jan 21 10:19:45 np0005590810 kernel: ftrace: allocated 194 pages with 3 groups
Jan 21 10:19:45 np0005590810 kernel: Dynamic Preempt: voluntary
Jan 21 10:19:45 np0005590810 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 21 10:19:45 np0005590810 kernel: rcu: 	RCU event tracing is enabled.
Jan 21 10:19:45 np0005590810 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 21 10:19:45 np0005590810 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 21 10:19:45 np0005590810 kernel: 	Rude variant of Tasks RCU enabled.
Jan 21 10:19:45 np0005590810 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 21 10:19:45 np0005590810 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 21 10:19:45 np0005590810 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 21 10:19:45 np0005590810 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 21 10:19:45 np0005590810 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 21 10:19:45 np0005590810 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 21 10:19:45 np0005590810 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 21 10:19:45 np0005590810 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 21 10:19:45 np0005590810 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 21 10:19:45 np0005590810 kernel: Console: colour VGA+ 80x25
Jan 21 10:19:45 np0005590810 kernel: printk: console [ttyS0] enabled
Jan 21 10:19:45 np0005590810 kernel: ACPI: Core revision 20230331
Jan 21 10:19:45 np0005590810 kernel: APIC: Switch to symmetric I/O mode setup
Jan 21 10:19:45 np0005590810 kernel: x2apic enabled
Jan 21 10:19:45 np0005590810 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 21 10:19:45 np0005590810 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 21 10:19:45 np0005590810 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 21 10:19:45 np0005590810 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 21 10:19:45 np0005590810 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 21 10:19:45 np0005590810 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 21 10:19:45 np0005590810 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 21 10:19:45 np0005590810 kernel: Spectre V2 : Mitigation: Retpolines
Jan 21 10:19:45 np0005590810 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 21 10:19:45 np0005590810 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 21 10:19:45 np0005590810 kernel: RETBleed: Mitigation: untrained return thunk
Jan 21 10:19:45 np0005590810 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 21 10:19:45 np0005590810 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 21 10:19:45 np0005590810 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 21 10:19:45 np0005590810 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 21 10:19:45 np0005590810 kernel: x86/bugs: return thunk changed
Jan 21 10:19:45 np0005590810 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 21 10:19:45 np0005590810 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 21 10:19:45 np0005590810 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 21 10:19:45 np0005590810 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 21 10:19:45 np0005590810 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 21 10:19:45 np0005590810 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 21 10:19:45 np0005590810 kernel: Freeing SMP alternatives memory: 40K
Jan 21 10:19:45 np0005590810 kernel: pid_max: default: 32768 minimum: 301
Jan 21 10:19:45 np0005590810 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 21 10:19:45 np0005590810 kernel: landlock: Up and running.
Jan 21 10:19:45 np0005590810 kernel: Yama: becoming mindful.
Jan 21 10:19:45 np0005590810 kernel: SELinux:  Initializing.
Jan 21 10:19:45 np0005590810 kernel: LSM support for eBPF active
Jan 21 10:19:45 np0005590810 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 21 10:19:45 np0005590810 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 21 10:19:45 np0005590810 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 21 10:19:45 np0005590810 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 21 10:19:45 np0005590810 kernel: ... version:                0
Jan 21 10:19:45 np0005590810 kernel: ... bit width:              48
Jan 21 10:19:45 np0005590810 kernel: ... generic registers:      6
Jan 21 10:19:45 np0005590810 kernel: ... value mask:             0000ffffffffffff
Jan 21 10:19:45 np0005590810 kernel: ... max period:             00007fffffffffff
Jan 21 10:19:45 np0005590810 kernel: ... fixed-purpose events:   0
Jan 21 10:19:45 np0005590810 kernel: ... event mask:             000000000000003f
Jan 21 10:19:45 np0005590810 kernel: signal: max sigframe size: 1776
Jan 21 10:19:45 np0005590810 kernel: rcu: Hierarchical SRCU implementation.
Jan 21 10:19:45 np0005590810 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 21 10:19:45 np0005590810 kernel: smp: Bringing up secondary CPUs ...
Jan 21 10:19:45 np0005590810 kernel: smpboot: x86: Booting SMP configuration:
Jan 21 10:19:45 np0005590810 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 21 10:19:45 np0005590810 kernel: smp: Brought up 1 node, 8 CPUs
Jan 21 10:19:45 np0005590810 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 21 10:19:45 np0005590810 kernel: node 0 deferred pages initialised in 9ms
Jan 21 10:19:45 np0005590810 kernel: Memory: 7763576K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
Jan 21 10:19:45 np0005590810 kernel: devtmpfs: initialized
Jan 21 10:19:45 np0005590810 kernel: x86/mm: Memory block size: 128MB
Jan 21 10:19:45 np0005590810 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 21 10:19:45 np0005590810 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 21 10:19:45 np0005590810 kernel: pinctrl core: initialized pinctrl subsystem
Jan 21 10:19:45 np0005590810 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 21 10:19:45 np0005590810 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 21 10:19:45 np0005590810 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 21 10:19:45 np0005590810 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 21 10:19:45 np0005590810 kernel: audit: initializing netlink subsys (disabled)
Jan 21 10:19:45 np0005590810 kernel: audit: type=2000 audit(1769008783.115:1): state=initialized audit_enabled=0 res=1
Jan 21 10:19:45 np0005590810 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 21 10:19:45 np0005590810 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 21 10:19:45 np0005590810 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 21 10:19:45 np0005590810 kernel: cpuidle: using governor menu
Jan 21 10:19:45 np0005590810 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 21 10:19:45 np0005590810 kernel: PCI: Using configuration type 1 for base access
Jan 21 10:19:45 np0005590810 kernel: PCI: Using configuration type 1 for extended access
Jan 21 10:19:45 np0005590810 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 21 10:19:45 np0005590810 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 21 10:19:45 np0005590810 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 21 10:19:45 np0005590810 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 21 10:19:45 np0005590810 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 21 10:19:45 np0005590810 kernel: Demotion targets for Node 0: null
Jan 21 10:19:45 np0005590810 kernel: cryptd: max_cpu_qlen set to 1000
Jan 21 10:19:45 np0005590810 kernel: ACPI: Added _OSI(Module Device)
Jan 21 10:19:45 np0005590810 kernel: ACPI: Added _OSI(Processor Device)
Jan 21 10:19:45 np0005590810 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 21 10:19:45 np0005590810 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 21 10:19:45 np0005590810 kernel: ACPI: Interpreter enabled
Jan 21 10:19:45 np0005590810 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 21 10:19:45 np0005590810 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 21 10:19:45 np0005590810 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 21 10:19:45 np0005590810 kernel: PCI: Using E820 reservations for host bridge windows
Jan 21 10:19:45 np0005590810 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 21 10:19:45 np0005590810 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 21 10:19:45 np0005590810 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [3] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [4] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [5] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [6] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [7] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [8] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [9] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [10] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [11] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [12] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [13] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [14] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [15] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [16] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [17] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [18] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [19] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [20] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [21] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [22] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [23] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [24] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [25] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [26] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [27] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [28] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [29] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [30] registered
Jan 21 10:19:45 np0005590810 kernel: acpiphp: Slot [31] registered
Jan 21 10:19:45 np0005590810 kernel: PCI host bridge to bus 0000:00
Jan 21 10:19:45 np0005590810 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 21 10:19:45 np0005590810 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 21 10:19:45 np0005590810 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 21 10:19:45 np0005590810 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 21 10:19:45 np0005590810 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 21 10:19:45 np0005590810 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 21 10:19:45 np0005590810 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 21 10:19:45 np0005590810 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 21 10:19:45 np0005590810 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 21 10:19:45 np0005590810 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 21 10:19:45 np0005590810 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 21 10:19:45 np0005590810 kernel: iommu: Default domain type: Translated
Jan 21 10:19:45 np0005590810 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 21 10:19:45 np0005590810 kernel: SCSI subsystem initialized
Jan 21 10:19:45 np0005590810 kernel: ACPI: bus type USB registered
Jan 21 10:19:45 np0005590810 kernel: usbcore: registered new interface driver usbfs
Jan 21 10:19:45 np0005590810 kernel: usbcore: registered new interface driver hub
Jan 21 10:19:45 np0005590810 kernel: usbcore: registered new device driver usb
Jan 21 10:19:45 np0005590810 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 21 10:19:45 np0005590810 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 21 10:19:45 np0005590810 kernel: PTP clock support registered
Jan 21 10:19:45 np0005590810 kernel: EDAC MC: Ver: 3.0.0
Jan 21 10:19:45 np0005590810 kernel: NetLabel: Initializing
Jan 21 10:19:45 np0005590810 kernel: NetLabel:  domain hash size = 128
Jan 21 10:19:45 np0005590810 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 21 10:19:45 np0005590810 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 21 10:19:45 np0005590810 kernel: PCI: Using ACPI for IRQ routing
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 21 10:19:45 np0005590810 kernel: vgaarb: loaded
Jan 21 10:19:45 np0005590810 kernel: clocksource: Switched to clocksource kvm-clock
Jan 21 10:19:45 np0005590810 kernel: VFS: Disk quotas dquot_6.6.0
Jan 21 10:19:45 np0005590810 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 21 10:19:45 np0005590810 kernel: pnp: PnP ACPI init
Jan 21 10:19:45 np0005590810 kernel: pnp: PnP ACPI: found 5 devices
Jan 21 10:19:45 np0005590810 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 21 10:19:45 np0005590810 kernel: NET: Registered PF_INET protocol family
Jan 21 10:19:45 np0005590810 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 21 10:19:45 np0005590810 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 21 10:19:45 np0005590810 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 21 10:19:45 np0005590810 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 21 10:19:45 np0005590810 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 21 10:19:45 np0005590810 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 21 10:19:45 np0005590810 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 21 10:19:45 np0005590810 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 21 10:19:45 np0005590810 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 21 10:19:45 np0005590810 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 21 10:19:45 np0005590810 kernel: NET: Registered PF_XDP protocol family
Jan 21 10:19:45 np0005590810 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 21 10:19:45 np0005590810 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 21 10:19:45 np0005590810 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 21 10:19:45 np0005590810 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 21 10:19:45 np0005590810 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 21 10:19:45 np0005590810 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 21 10:19:45 np0005590810 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 94933 usecs
Jan 21 10:19:45 np0005590810 kernel: PCI: CLS 0 bytes, default 64
Jan 21 10:19:45 np0005590810 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 21 10:19:45 np0005590810 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 21 10:19:45 np0005590810 kernel: ACPI: bus type thunderbolt registered
Jan 21 10:19:45 np0005590810 kernel: Trying to unpack rootfs image as initramfs...
Jan 21 10:19:45 np0005590810 kernel: Initialise system trusted keyrings
Jan 21 10:19:45 np0005590810 kernel: Key type blacklist registered
Jan 21 10:19:45 np0005590810 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 21 10:19:45 np0005590810 kernel: zbud: loaded
Jan 21 10:19:45 np0005590810 kernel: integrity: Platform Keyring initialized
Jan 21 10:19:45 np0005590810 kernel: integrity: Machine keyring initialized
Jan 21 10:19:45 np0005590810 kernel: Freeing initrd memory: 87956K
Jan 21 10:19:45 np0005590810 kernel: NET: Registered PF_ALG protocol family
Jan 21 10:19:45 np0005590810 kernel: xor: automatically using best checksumming function   avx       
Jan 21 10:19:45 np0005590810 kernel: Key type asymmetric registered
Jan 21 10:19:45 np0005590810 kernel: Asymmetric key parser 'x509' registered
Jan 21 10:19:45 np0005590810 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 21 10:19:45 np0005590810 kernel: io scheduler mq-deadline registered
Jan 21 10:19:45 np0005590810 kernel: io scheduler kyber registered
Jan 21 10:19:45 np0005590810 kernel: io scheduler bfq registered
Jan 21 10:19:45 np0005590810 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 21 10:19:45 np0005590810 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 21 10:19:45 np0005590810 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 21 10:19:45 np0005590810 kernel: ACPI: button: Power Button [PWRF]
Jan 21 10:19:45 np0005590810 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 21 10:19:45 np0005590810 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 21 10:19:45 np0005590810 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 21 10:19:45 np0005590810 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 21 10:19:45 np0005590810 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 21 10:19:45 np0005590810 kernel: Non-volatile memory driver v1.3
Jan 21 10:19:45 np0005590810 kernel: rdac: device handler registered
Jan 21 10:19:45 np0005590810 kernel: hp_sw: device handler registered
Jan 21 10:19:45 np0005590810 kernel: emc: device handler registered
Jan 21 10:19:45 np0005590810 kernel: alua: device handler registered
Jan 21 10:19:45 np0005590810 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 21 10:19:45 np0005590810 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 21 10:19:45 np0005590810 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 21 10:19:45 np0005590810 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 21 10:19:45 np0005590810 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 21 10:19:45 np0005590810 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 21 10:19:45 np0005590810 kernel: usb usb1: Product: UHCI Host Controller
Jan 21 10:19:45 np0005590810 kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 21 10:19:45 np0005590810 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 21 10:19:45 np0005590810 kernel: hub 1-0:1.0: USB hub found
Jan 21 10:19:45 np0005590810 kernel: hub 1-0:1.0: 2 ports detected
Jan 21 10:19:45 np0005590810 kernel: usbcore: registered new interface driver usbserial_generic
Jan 21 10:19:45 np0005590810 kernel: usbserial: USB Serial support registered for generic
Jan 21 10:19:45 np0005590810 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 21 10:19:45 np0005590810 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 21 10:19:45 np0005590810 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 21 10:19:45 np0005590810 kernel: mousedev: PS/2 mouse device common for all mice
Jan 21 10:19:45 np0005590810 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 21 10:19:45 np0005590810 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 21 10:19:45 np0005590810 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 21 10:19:45 np0005590810 kernel: rtc_cmos 00:04: registered as rtc0
Jan 21 10:19:45 np0005590810 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 21 10:19:45 np0005590810 kernel: rtc_cmos 00:04: setting system clock to 2026-01-21T15:19:44 UTC (1769008784)
Jan 21 10:19:45 np0005590810 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 21 10:19:45 np0005590810 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 21 10:19:45 np0005590810 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 21 10:19:45 np0005590810 kernel: usbcore: registered new interface driver usbhid
Jan 21 10:19:45 np0005590810 kernel: usbhid: USB HID core driver
Jan 21 10:19:45 np0005590810 kernel: drop_monitor: Initializing network drop monitor service
Jan 21 10:19:45 np0005590810 kernel: Initializing XFRM netlink socket
Jan 21 10:19:45 np0005590810 kernel: NET: Registered PF_INET6 protocol family
Jan 21 10:19:45 np0005590810 kernel: Segment Routing with IPv6
Jan 21 10:19:45 np0005590810 kernel: NET: Registered PF_PACKET protocol family
Jan 21 10:19:45 np0005590810 kernel: mpls_gso: MPLS GSO support
Jan 21 10:19:45 np0005590810 kernel: IPI shorthand broadcast: enabled
Jan 21 10:19:45 np0005590810 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 21 10:19:45 np0005590810 kernel: AES CTR mode by8 optimization enabled
Jan 21 10:19:45 np0005590810 kernel: sched_clock: Marking stable (1287002931, 151330556)->(1530353164, -92019677)
Jan 21 10:19:45 np0005590810 kernel: registered taskstats version 1
Jan 21 10:19:45 np0005590810 kernel: Loading compiled-in X.509 certificates
Jan 21 10:19:45 np0005590810 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 21 10:19:45 np0005590810 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 21 10:19:45 np0005590810 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 21 10:19:45 np0005590810 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 21 10:19:45 np0005590810 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 21 10:19:45 np0005590810 kernel: Demotion targets for Node 0: null
Jan 21 10:19:45 np0005590810 kernel: page_owner is disabled
Jan 21 10:19:45 np0005590810 kernel: Key type .fscrypt registered
Jan 21 10:19:45 np0005590810 kernel: Key type fscrypt-provisioning registered
Jan 21 10:19:45 np0005590810 kernel: Key type big_key registered
Jan 21 10:19:45 np0005590810 kernel: Key type encrypted registered
Jan 21 10:19:45 np0005590810 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 21 10:19:45 np0005590810 kernel: Loading compiled-in module X.509 certificates
Jan 21 10:19:45 np0005590810 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 21 10:19:45 np0005590810 kernel: ima: Allocated hash algorithm: sha256
Jan 21 10:19:45 np0005590810 kernel: ima: No architecture policies found
Jan 21 10:19:45 np0005590810 kernel: evm: Initialising EVM extended attributes:
Jan 21 10:19:45 np0005590810 kernel: evm: security.selinux
Jan 21 10:19:45 np0005590810 kernel: evm: security.SMACK64 (disabled)
Jan 21 10:19:45 np0005590810 kernel: evm: security.SMACK64EXEC (disabled)
Jan 21 10:19:45 np0005590810 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 21 10:19:45 np0005590810 kernel: evm: security.SMACK64MMAP (disabled)
Jan 21 10:19:45 np0005590810 kernel: evm: security.apparmor (disabled)
Jan 21 10:19:45 np0005590810 kernel: evm: security.ima
Jan 21 10:19:45 np0005590810 kernel: evm: security.capability
Jan 21 10:19:45 np0005590810 kernel: evm: HMAC attrs: 0x1
Jan 21 10:19:45 np0005590810 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 21 10:19:45 np0005590810 kernel: Running certificate verification RSA selftest
Jan 21 10:19:45 np0005590810 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 21 10:19:45 np0005590810 kernel: Running certificate verification ECDSA selftest
Jan 21 10:19:45 np0005590810 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 21 10:19:45 np0005590810 kernel: clk: Disabling unused clocks
Jan 21 10:19:45 np0005590810 kernel: Freeing unused decrypted memory: 2028K
Jan 21 10:19:45 np0005590810 kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 21 10:19:45 np0005590810 kernel: Write protecting the kernel read-only data: 30720k
Jan 21 10:19:45 np0005590810 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 21 10:19:45 np0005590810 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 21 10:19:45 np0005590810 kernel: Run /init as init process
Jan 21 10:19:45 np0005590810 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 21 10:19:45 np0005590810 systemd: Detected virtualization kvm.
Jan 21 10:19:45 np0005590810 systemd: Detected architecture x86-64.
Jan 21 10:19:45 np0005590810 systemd: Running in initrd.
Jan 21 10:19:45 np0005590810 systemd: No hostname configured, using default hostname.
Jan 21 10:19:45 np0005590810 systemd: Hostname set to <localhost>.
Jan 21 10:19:45 np0005590810 systemd: Initializing machine ID from VM UUID.
Jan 21 10:19:45 np0005590810 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 21 10:19:45 np0005590810 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 21 10:19:45 np0005590810 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 21 10:19:45 np0005590810 kernel: usb 1-1: Manufacturer: QEMU
Jan 21 10:19:45 np0005590810 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 21 10:19:45 np0005590810 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 21 10:19:45 np0005590810 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 21 10:19:45 np0005590810 systemd: Queued start job for default target Initrd Default Target.
Jan 21 10:19:45 np0005590810 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 21 10:19:45 np0005590810 systemd: Reached target Local Encrypted Volumes.
Jan 21 10:19:45 np0005590810 systemd: Reached target Initrd /usr File System.
Jan 21 10:19:45 np0005590810 systemd: Reached target Local File Systems.
Jan 21 10:19:45 np0005590810 systemd: Reached target Path Units.
Jan 21 10:19:45 np0005590810 systemd: Reached target Slice Units.
Jan 21 10:19:45 np0005590810 systemd: Reached target Swaps.
Jan 21 10:19:45 np0005590810 systemd: Reached target Timer Units.
Jan 21 10:19:45 np0005590810 systemd: Listening on D-Bus System Message Bus Socket.
Jan 21 10:19:45 np0005590810 systemd: Listening on Journal Socket (/dev/log).
Jan 21 10:19:45 np0005590810 systemd: Listening on Journal Socket.
Jan 21 10:19:45 np0005590810 systemd: Listening on udev Control Socket.
Jan 21 10:19:45 np0005590810 systemd: Listening on udev Kernel Socket.
Jan 21 10:19:45 np0005590810 systemd: Reached target Socket Units.
Jan 21 10:19:45 np0005590810 systemd: Starting Create List of Static Device Nodes...
Jan 21 10:19:45 np0005590810 systemd: Starting Journal Service...
Jan 21 10:19:45 np0005590810 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 21 10:19:45 np0005590810 systemd: Starting Apply Kernel Variables...
Jan 21 10:19:45 np0005590810 systemd: Starting Create System Users...
Jan 21 10:19:45 np0005590810 systemd: Starting Setup Virtual Console...
Jan 21 10:19:45 np0005590810 systemd: Finished Create List of Static Device Nodes.
Jan 21 10:19:45 np0005590810 systemd: Finished Apply Kernel Variables.
Jan 21 10:19:45 np0005590810 systemd-journald[303]: Journal started
Jan 21 10:19:45 np0005590810 systemd-journald[303]: Runtime Journal (/run/log/journal/ef0b02ddef52452fa99a26608ae61ceb) is 8.0M, max 153.6M, 145.6M free.
Jan 21 10:19:45 np0005590810 systemd-sysusers[308]: Creating group 'users' with GID 100.
Jan 21 10:19:45 np0005590810 systemd: Started Journal Service.
Jan 21 10:19:45 np0005590810 systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Jan 21 10:19:45 np0005590810 systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 21 10:19:45 np0005590810 systemd[1]: Finished Create System Users.
Jan 21 10:19:45 np0005590810 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 21 10:19:45 np0005590810 systemd[1]: Starting Create Volatile Files and Directories...
Jan 21 10:19:45 np0005590810 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 21 10:19:45 np0005590810 systemd[1]: Finished Create Volatile Files and Directories.
Jan 21 10:19:45 np0005590810 systemd[1]: Finished Setup Virtual Console.
Jan 21 10:19:45 np0005590810 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 21 10:19:45 np0005590810 systemd[1]: Starting dracut cmdline hook...
Jan 21 10:19:45 np0005590810 dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Jan 21 10:19:45 np0005590810 dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 21 10:19:45 np0005590810 systemd[1]: Finished dracut cmdline hook.
Jan 21 10:19:45 np0005590810 systemd[1]: Starting dracut pre-udev hook...
Jan 21 10:19:45 np0005590810 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 21 10:19:45 np0005590810 kernel: device-mapper: uevent: version 1.0.3
Jan 21 10:19:45 np0005590810 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 21 10:19:45 np0005590810 kernel: RPC: Registered named UNIX socket transport module.
Jan 21 10:19:45 np0005590810 kernel: RPC: Registered udp transport module.
Jan 21 10:19:45 np0005590810 kernel: RPC: Registered tcp transport module.
Jan 21 10:19:45 np0005590810 kernel: RPC: Registered tcp-with-tls transport module.
Jan 21 10:19:45 np0005590810 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 21 10:19:45 np0005590810 rpc.statd[441]: Version 2.5.4 starting
Jan 21 10:19:45 np0005590810 rpc.statd[441]: Initializing NSM state
Jan 21 10:19:45 np0005590810 rpc.idmapd[446]: Setting log level to 0
Jan 21 10:19:45 np0005590810 systemd[1]: Finished dracut pre-udev hook.
Jan 21 10:19:45 np0005590810 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 21 10:19:45 np0005590810 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Jan 21 10:19:45 np0005590810 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 21 10:19:45 np0005590810 systemd[1]: Starting dracut pre-trigger hook...
Jan 21 10:19:45 np0005590810 systemd[1]: Finished dracut pre-trigger hook.
Jan 21 10:19:45 np0005590810 systemd[1]: Starting Coldplug All udev Devices...
Jan 21 10:19:45 np0005590810 systemd[1]: Created slice Slice /system/modprobe.
Jan 21 10:19:45 np0005590810 systemd[1]: Starting Load Kernel Module configfs...
Jan 21 10:19:45 np0005590810 systemd[1]: Finished Coldplug All udev Devices.
Jan 21 10:19:45 np0005590810 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 21 10:19:45 np0005590810 systemd[1]: Finished Load Kernel Module configfs.
Jan 21 10:19:45 np0005590810 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 21 10:19:45 np0005590810 systemd[1]: Reached target Network.
Jan 21 10:19:45 np0005590810 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 21 10:19:45 np0005590810 systemd[1]: Starting dracut initqueue hook...
Jan 21 10:19:45 np0005590810 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 21 10:19:45 np0005590810 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 21 10:19:45 np0005590810 kernel: vda: vda1
Jan 21 10:19:45 np0005590810 systemd-udevd[483]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 10:19:45 np0005590810 kernel: scsi host0: ata_piix
Jan 21 10:19:45 np0005590810 kernel: scsi host1: ata_piix
Jan 21 10:19:45 np0005590810 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 21 10:19:45 np0005590810 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 21 10:19:45 np0005590810 systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 21 10:19:46 np0005590810 systemd[1]: Reached target Initrd Root Device.
Jan 21 10:19:46 np0005590810 kernel: ata1: found unknown device (class 0)
Jan 21 10:19:46 np0005590810 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 21 10:19:46 np0005590810 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 21 10:19:46 np0005590810 systemd[1]: Mounting Kernel Configuration File System...
Jan 21 10:19:46 np0005590810 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 21 10:19:46 np0005590810 systemd[1]: Mounted Kernel Configuration File System.
Jan 21 10:19:46 np0005590810 systemd[1]: Reached target System Initialization.
Jan 21 10:19:46 np0005590810 systemd[1]: Reached target Basic System.
Jan 21 10:19:46 np0005590810 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 21 10:19:46 np0005590810 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 21 10:19:46 np0005590810 systemd[1]: Finished dracut initqueue hook.
Jan 21 10:19:46 np0005590810 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 21 10:19:46 np0005590810 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 21 10:19:46 np0005590810 systemd[1]: Reached target Remote File Systems.
Jan 21 10:19:46 np0005590810 systemd[1]: Starting dracut pre-mount hook...
Jan 21 10:19:46 np0005590810 systemd[1]: Finished dracut pre-mount hook.
Jan 21 10:19:46 np0005590810 systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 21 10:19:46 np0005590810 systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Jan 21 10:19:46 np0005590810 systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 21 10:19:46 np0005590810 systemd[1]: Mounting /sysroot...
Jan 21 10:19:46 np0005590810 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 21 10:19:46 np0005590810 kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 21 10:19:46 np0005590810 kernel: XFS (vda1): Ending clean mount
Jan 21 10:19:46 np0005590810 systemd[1]: Mounted /sysroot.
Jan 21 10:19:46 np0005590810 systemd[1]: Reached target Initrd Root File System.
Jan 21 10:19:46 np0005590810 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 21 10:19:46 np0005590810 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 21 10:19:46 np0005590810 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 21 10:19:46 np0005590810 systemd[1]: Reached target Initrd File Systems.
Jan 21 10:19:46 np0005590810 systemd[1]: Reached target Initrd Default Target.
Jan 21 10:19:46 np0005590810 systemd[1]: Starting dracut mount hook...
Jan 21 10:19:46 np0005590810 systemd[1]: Finished dracut mount hook.
Jan 21 10:19:46 np0005590810 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 21 10:19:46 np0005590810 rpc.idmapd[446]: exiting on signal 15
Jan 21 10:19:47 np0005590810 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 21 10:19:47 np0005590810 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Network.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Timer Units.
Jan 21 10:19:47 np0005590810 systemd[1]: dbus.socket: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 21 10:19:47 np0005590810 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Initrd Default Target.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Basic System.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Initrd Root Device.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Initrd /usr File System.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Path Units.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Remote File Systems.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Slice Units.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Socket Units.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target System Initialization.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Local File Systems.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Swaps.
Jan 21 10:19:47 np0005590810 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped dracut mount hook.
Jan 21 10:19:47 np0005590810 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped dracut pre-mount hook.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 21 10:19:47 np0005590810 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 21 10:19:47 np0005590810 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped dracut initqueue hook.
Jan 21 10:19:47 np0005590810 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped Apply Kernel Variables.
Jan 21 10:19:47 np0005590810 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 21 10:19:47 np0005590810 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped Coldplug All udev Devices.
Jan 21 10:19:47 np0005590810 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped dracut pre-trigger hook.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 21 10:19:47 np0005590810 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped Setup Virtual Console.
Jan 21 10:19:47 np0005590810 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 21 10:19:47 np0005590810 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 21 10:19:47 np0005590810 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Closed udev Control Socket.
Jan 21 10:19:47 np0005590810 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Closed udev Kernel Socket.
Jan 21 10:19:47 np0005590810 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped dracut pre-udev hook.
Jan 21 10:19:47 np0005590810 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped dracut cmdline hook.
Jan 21 10:19:47 np0005590810 systemd[1]: Starting Cleanup udev Database...
Jan 21 10:19:47 np0005590810 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 21 10:19:47 np0005590810 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 21 10:19:47 np0005590810 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Stopped Create System Users.
Jan 21 10:19:47 np0005590810 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Cleanup udev Database.
Jan 21 10:19:47 np0005590810 systemd[1]: Reached target Switch Root.
Jan 21 10:19:47 np0005590810 systemd[1]: Starting Switch Root...
Jan 21 10:19:47 np0005590810 systemd[1]: Switching root.
Jan 21 10:19:47 np0005590810 systemd-journald[303]: Journal stopped
Jan 21 10:19:47 np0005590810 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 21 10:19:47 np0005590810 kernel: audit: type=1404 audit(1769008787.226:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 21 10:19:47 np0005590810 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 10:19:47 np0005590810 kernel: SELinux:  policy capability open_perms=1
Jan 21 10:19:47 np0005590810 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 10:19:47 np0005590810 kernel: SELinux:  policy capability always_check_network=0
Jan 21 10:19:47 np0005590810 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 10:19:47 np0005590810 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 10:19:47 np0005590810 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 10:19:47 np0005590810 kernel: audit: type=1403 audit(1769008787.346:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 21 10:19:47 np0005590810 systemd: Successfully loaded SELinux policy in 123.128ms.
Jan 21 10:19:47 np0005590810 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.636ms.
Jan 21 10:19:47 np0005590810 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 21 10:19:47 np0005590810 systemd: Detected virtualization kvm.
Jan 21 10:19:47 np0005590810 systemd: Detected architecture x86-64.
Jan 21 10:19:47 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:19:47 np0005590810 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd: Stopped Switch Root.
Jan 21 10:19:47 np0005590810 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 21 10:19:47 np0005590810 systemd: Created slice Slice /system/getty.
Jan 21 10:19:47 np0005590810 systemd: Created slice Slice /system/serial-getty.
Jan 21 10:19:47 np0005590810 systemd: Created slice Slice /system/sshd-keygen.
Jan 21 10:19:47 np0005590810 systemd: Created slice User and Session Slice.
Jan 21 10:19:47 np0005590810 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 21 10:19:47 np0005590810 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 21 10:19:47 np0005590810 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 21 10:19:47 np0005590810 systemd: Reached target Local Encrypted Volumes.
Jan 21 10:19:47 np0005590810 systemd: Stopped target Switch Root.
Jan 21 10:19:47 np0005590810 systemd: Stopped target Initrd File Systems.
Jan 21 10:19:47 np0005590810 systemd: Stopped target Initrd Root File System.
Jan 21 10:19:47 np0005590810 systemd: Reached target Local Integrity Protected Volumes.
Jan 21 10:19:47 np0005590810 systemd: Reached target Path Units.
Jan 21 10:19:47 np0005590810 systemd: Reached target rpc_pipefs.target.
Jan 21 10:19:47 np0005590810 systemd: Reached target Slice Units.
Jan 21 10:19:47 np0005590810 systemd: Reached target Swaps.
Jan 21 10:19:47 np0005590810 systemd: Reached target Local Verity Protected Volumes.
Jan 21 10:19:47 np0005590810 systemd: Listening on RPCbind Server Activation Socket.
Jan 21 10:19:47 np0005590810 systemd: Reached target RPC Port Mapper.
Jan 21 10:19:47 np0005590810 systemd: Listening on Process Core Dump Socket.
Jan 21 10:19:47 np0005590810 systemd: Listening on initctl Compatibility Named Pipe.
Jan 21 10:19:47 np0005590810 systemd: Listening on udev Control Socket.
Jan 21 10:19:47 np0005590810 systemd: Listening on udev Kernel Socket.
Jan 21 10:19:47 np0005590810 systemd: Mounting Huge Pages File System...
Jan 21 10:19:47 np0005590810 systemd: Mounting POSIX Message Queue File System...
Jan 21 10:19:47 np0005590810 systemd: Mounting Kernel Debug File System...
Jan 21 10:19:47 np0005590810 systemd: Mounting Kernel Trace File System...
Jan 21 10:19:47 np0005590810 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 21 10:19:47 np0005590810 systemd: Starting Create List of Static Device Nodes...
Jan 21 10:19:47 np0005590810 systemd: Starting Load Kernel Module configfs...
Jan 21 10:19:47 np0005590810 systemd: Starting Load Kernel Module drm...
Jan 21 10:19:47 np0005590810 systemd: Starting Load Kernel Module efi_pstore...
Jan 21 10:19:47 np0005590810 systemd: Starting Load Kernel Module fuse...
Jan 21 10:19:47 np0005590810 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 21 10:19:47 np0005590810 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd: Stopped File System Check on Root Device.
Jan 21 10:19:47 np0005590810 systemd: Stopped Journal Service.
Jan 21 10:19:47 np0005590810 systemd: Starting Journal Service...
Jan 21 10:19:47 np0005590810 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 21 10:19:47 np0005590810 systemd: Starting Generate network units from Kernel command line...
Jan 21 10:19:47 np0005590810 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 21 10:19:47 np0005590810 systemd: Starting Remount Root and Kernel File Systems...
Jan 21 10:19:47 np0005590810 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 21 10:19:47 np0005590810 systemd: Starting Apply Kernel Variables...
Jan 21 10:19:47 np0005590810 kernel: fuse: init (API version 7.37)
Jan 21 10:19:47 np0005590810 systemd: Starting Coldplug All udev Devices...
Jan 21 10:19:47 np0005590810 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 21 10:19:47 np0005590810 systemd: Mounted Huge Pages File System.
Jan 21 10:19:47 np0005590810 systemd: Mounted POSIX Message Queue File System.
Jan 21 10:19:47 np0005590810 systemd: Mounted Kernel Debug File System.
Jan 21 10:19:47 np0005590810 systemd: Mounted Kernel Trace File System.
Jan 21 10:19:47 np0005590810 systemd: Finished Create List of Static Device Nodes.
Jan 21 10:19:47 np0005590810 systemd-journald[677]: Journal started
Jan 21 10:19:47 np0005590810 systemd-journald[677]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 21 10:19:47 np0005590810 systemd[1]: Queued start job for default target Multi-User System.
Jan 21 10:19:47 np0005590810 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd: modprobe@configfs.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd: Finished Load Kernel Module configfs.
Jan 21 10:19:47 np0005590810 systemd: Started Journal Service.
Jan 21 10:19:47 np0005590810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 21 10:19:47 np0005590810 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Load Kernel Module fuse.
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 21 10:19:47 np0005590810 kernel: ACPI: bus type drm_connector registered
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Generate network units from Kernel command line.
Jan 21 10:19:47 np0005590810 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Load Kernel Module drm.
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Apply Kernel Variables.
Jan 21 10:19:47 np0005590810 systemd[1]: Mounting FUSE Control File System...
Jan 21 10:19:47 np0005590810 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 21 10:19:47 np0005590810 systemd[1]: Starting Rebuild Hardware Database...
Jan 21 10:19:47 np0005590810 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 21 10:19:47 np0005590810 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 21 10:19:47 np0005590810 systemd[1]: Starting Load/Save OS Random Seed...
Jan 21 10:19:47 np0005590810 systemd[1]: Starting Create System Users...
Jan 21 10:19:47 np0005590810 systemd-journald[677]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 21 10:19:47 np0005590810 systemd-journald[677]: Received client request to flush runtime journal.
Jan 21 10:19:47 np0005590810 systemd[1]: Mounted FUSE Control File System.
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Load/Save OS Random Seed.
Jan 21 10:19:47 np0005590810 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Create System Users.
Jan 21 10:19:47 np0005590810 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Coldplug All udev Devices.
Jan 21 10:19:47 np0005590810 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 21 10:19:47 np0005590810 systemd[1]: Reached target Preparation for Local File Systems.
Jan 21 10:19:47 np0005590810 systemd[1]: Reached target Local File Systems.
Jan 21 10:19:47 np0005590810 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 21 10:19:47 np0005590810 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 21 10:19:47 np0005590810 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 21 10:19:47 np0005590810 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 21 10:19:47 np0005590810 systemd[1]: Starting Automatic Boot Loader Update...
Jan 21 10:19:48 np0005590810 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 21 10:19:48 np0005590810 systemd[1]: Starting Create Volatile Files and Directories...
Jan 21 10:19:48 np0005590810 bootctl[696]: Couldn't find EFI system partition, skipping.
Jan 21 10:19:48 np0005590810 systemd[1]: Finished Automatic Boot Loader Update.
Jan 21 10:19:48 np0005590810 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 21 10:19:48 np0005590810 systemd[1]: Finished Create Volatile Files and Directories.
Jan 21 10:19:48 np0005590810 systemd[1]: Starting Security Auditing Service...
Jan 21 10:19:48 np0005590810 systemd[1]: Starting RPC Bind...
Jan 21 10:19:48 np0005590810 systemd[1]: Starting Rebuild Journal Catalog...
Jan 21 10:19:48 np0005590810 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 21 10:19:48 np0005590810 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 21 10:19:48 np0005590810 systemd[1]: Finished Rebuild Journal Catalog.
Jan 21 10:19:48 np0005590810 systemd[1]: Started RPC Bind.
Jan 21 10:19:48 np0005590810 augenrules[707]: /sbin/augenrules: No change
Jan 21 10:19:48 np0005590810 augenrules[722]: No rules
Jan 21 10:19:48 np0005590810 augenrules[722]: enabled 1
Jan 21 10:19:48 np0005590810 augenrules[722]: failure 1
Jan 21 10:19:48 np0005590810 augenrules[722]: pid 702
Jan 21 10:19:48 np0005590810 augenrules[722]: rate_limit 0
Jan 21 10:19:48 np0005590810 augenrules[722]: backlog_limit 8192
Jan 21 10:19:48 np0005590810 augenrules[722]: lost 0
Jan 21 10:19:48 np0005590810 augenrules[722]: backlog 4
Jan 21 10:19:48 np0005590810 augenrules[722]: backlog_wait_time 60000
Jan 21 10:19:48 np0005590810 augenrules[722]: backlog_wait_time_actual 0
Jan 21 10:19:48 np0005590810 augenrules[722]: enabled 1
Jan 21 10:19:48 np0005590810 augenrules[722]: failure 1
Jan 21 10:19:48 np0005590810 augenrules[722]: pid 702
Jan 21 10:19:48 np0005590810 augenrules[722]: rate_limit 0
Jan 21 10:19:48 np0005590810 augenrules[722]: backlog_limit 8192
Jan 21 10:19:48 np0005590810 augenrules[722]: lost 0
Jan 21 10:19:48 np0005590810 augenrules[722]: backlog 4
Jan 21 10:19:48 np0005590810 augenrules[722]: backlog_wait_time 60000
Jan 21 10:19:48 np0005590810 augenrules[722]: backlog_wait_time_actual 0
Jan 21 10:19:48 np0005590810 augenrules[722]: enabled 1
Jan 21 10:19:48 np0005590810 augenrules[722]: failure 1
Jan 21 10:19:48 np0005590810 augenrules[722]: pid 702
Jan 21 10:19:48 np0005590810 augenrules[722]: rate_limit 0
Jan 21 10:19:48 np0005590810 augenrules[722]: backlog_limit 8192
Jan 21 10:19:48 np0005590810 augenrules[722]: lost 0
Jan 21 10:19:48 np0005590810 augenrules[722]: backlog 8
Jan 21 10:19:48 np0005590810 augenrules[722]: backlog_wait_time 60000
Jan 21 10:19:48 np0005590810 augenrules[722]: backlog_wait_time_actual 0
Jan 21 10:19:48 np0005590810 systemd[1]: Started Security Auditing Service.
Jan 21 10:19:48 np0005590810 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 21 10:19:48 np0005590810 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 21 10:19:48 np0005590810 systemd[1]: Finished Rebuild Hardware Database.
Jan 21 10:19:48 np0005590810 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 21 10:19:48 np0005590810 systemd[1]: Starting Update is Completed...
Jan 21 10:19:48 np0005590810 systemd[1]: Finished Update is Completed.
Jan 21 10:19:48 np0005590810 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Jan 21 10:19:48 np0005590810 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 21 10:19:48 np0005590810 systemd[1]: Reached target System Initialization.
Jan 21 10:19:48 np0005590810 systemd[1]: Started dnf makecache --timer.
Jan 21 10:19:48 np0005590810 systemd[1]: Started Daily rotation of log files.
Jan 21 10:19:48 np0005590810 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 21 10:19:48 np0005590810 systemd[1]: Reached target Timer Units.
Jan 21 10:19:48 np0005590810 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 21 10:19:48 np0005590810 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 21 10:19:48 np0005590810 systemd[1]: Reached target Socket Units.
Jan 21 10:19:48 np0005590810 systemd[1]: Starting D-Bus System Message Bus...
Jan 21 10:19:48 np0005590810 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 21 10:19:48 np0005590810 systemd[1]: Starting Load Kernel Module configfs...
Jan 21 10:19:48 np0005590810 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 21 10:19:48 np0005590810 systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 10:19:48 np0005590810 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 21 10:19:48 np0005590810 systemd[1]: Finished Load Kernel Module configfs.
Jan 21 10:19:48 np0005590810 systemd[1]: Started D-Bus System Message Bus.
Jan 21 10:19:48 np0005590810 systemd[1]: Reached target Basic System.
Jan 21 10:19:48 np0005590810 dbus-broker-lau[766]: Ready
Jan 21 10:19:48 np0005590810 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 21 10:19:48 np0005590810 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 21 10:19:48 np0005590810 systemd[1]: Starting NTP client/server...
Jan 21 10:19:48 np0005590810 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 21 10:19:48 np0005590810 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 21 10:19:48 np0005590810 systemd[1]: Starting IPv4 firewall with iptables...
Jan 21 10:19:48 np0005590810 systemd[1]: Started irqbalance daemon.
Jan 21 10:19:48 np0005590810 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 21 10:19:48 np0005590810 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 21 10:19:48 np0005590810 chronyd[791]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 21 10:19:48 np0005590810 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 21 10:19:48 np0005590810 chronyd[791]: Loaded 0 symmetric keys
Jan 21 10:19:48 np0005590810 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 10:19:48 np0005590810 chronyd[791]: Using right/UTC timezone to obtain leap second data
Jan 21 10:19:48 np0005590810 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 10:19:48 np0005590810 chronyd[791]: Loaded seccomp filter (level 2)
Jan 21 10:19:48 np0005590810 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 10:19:48 np0005590810 systemd[1]: Reached target sshd-keygen.target.
Jan 21 10:19:48 np0005590810 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 21 10:19:48 np0005590810 systemd[1]: Reached target User and Group Name Lookups.
Jan 21 10:19:48 np0005590810 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 21 10:19:48 np0005590810 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 21 10:19:48 np0005590810 kernel: Console: switching to colour dummy device 80x25
Jan 21 10:19:48 np0005590810 systemd[1]: Starting User Login Management...
Jan 21 10:19:48 np0005590810 systemd[1]: Started NTP client/server.
Jan 21 10:19:48 np0005590810 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 21 10:19:48 np0005590810 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 21 10:19:48 np0005590810 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 21 10:19:48 np0005590810 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 21 10:19:48 np0005590810 kernel: [drm] features: -context_init
Jan 21 10:19:48 np0005590810 kernel: [drm] number of scanouts: 1
Jan 21 10:19:48 np0005590810 kernel: [drm] number of cap sets: 0
Jan 21 10:19:48 np0005590810 kernel: kvm_amd: TSC scaling supported
Jan 21 10:19:48 np0005590810 kernel: kvm_amd: Nested Virtualization enabled
Jan 21 10:19:48 np0005590810 kernel: kvm_amd: Nested Paging enabled
Jan 21 10:19:48 np0005590810 kernel: kvm_amd: LBR virtualization supported
Jan 21 10:19:48 np0005590810 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 21 10:19:48 np0005590810 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 21 10:19:48 np0005590810 kernel: Console: switching to colour frame buffer device 128x48
Jan 21 10:19:48 np0005590810 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 21 10:19:49 np0005590810 systemd-logind[795]: New seat seat0.
Jan 21 10:19:49 np0005590810 systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 21 10:19:49 np0005590810 systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 21 10:19:49 np0005590810 systemd[1]: Started User Login Management.
Jan 21 10:19:49 np0005590810 iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Jan 21 10:19:49 np0005590810 systemd[1]: Finished IPv4 firewall with iptables.
Jan 21 10:19:49 np0005590810 cloud-init[840]: Cloud-init v. 24.4-8.el9 running 'init-local' at Wed, 21 Jan 2026 15:19:49 +0000. Up 5.95 seconds.
Jan 21 10:19:49 np0005590810 systemd[1]: run-cloud\x2dinit-tmp-tmpz5om_ek7.mount: Deactivated successfully.
Jan 21 10:19:49 np0005590810 systemd[1]: Starting Hostname Service...
Jan 21 10:19:49 np0005590810 systemd[1]: Started Hostname Service.
Jan 21 10:19:49 np0005590810 systemd-hostnamed[854]: Hostname set to <np0005590810.novalocal> (static)
Jan 21 10:19:49 np0005590810 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 21 10:19:49 np0005590810 systemd[1]: Reached target Preparation for Network.
Jan 21 10:19:49 np0005590810 systemd[1]: Starting Network Manager...
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7531] NetworkManager (version 1.54.3-2.el9) is starting... (boot:270b975d-78fa-4cd0-8c03-59ef0f09243d)
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7535] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7602] manager[0x55e8cb0ee000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7651] hostname: hostname: using hostnamed
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7651] hostname: static hostname changed from (none) to "np0005590810.novalocal"
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7655] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7779] manager[0x55e8cb0ee000]: rfkill: Wi-Fi hardware radio set enabled
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7780] manager[0x55e8cb0ee000]: rfkill: WWAN hardware radio set enabled
Jan 21 10:19:49 np0005590810 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7828] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7829] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7830] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7831] manager: Networking is enabled by state file
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7833] settings: Loaded settings plugin: keyfile (internal)
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7847] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7867] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7878] dhcp: init: Using DHCP client 'internal'
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7880] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7893] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7899] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7906] device (lo): Activation: starting connection 'lo' (253b81e5-ac75-4452-a3e4-15be611c9139)
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7914] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7917] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7953] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7957] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7960] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7962] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7965] device (eth0): carrier: link connected
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7969] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7976] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 21 10:19:49 np0005590810 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.7995] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 10:19:49 np0005590810 systemd[1]: Started Network Manager.
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8000] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8001] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8004] manager: NetworkManager state is now CONNECTING
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8006] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:19:49 np0005590810 systemd[1]: Reached target Network.
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8014] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8018] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 10:19:49 np0005590810 systemd[1]: Starting Network Manager Wait Online...
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8064] dhcp4 (eth0): state changed new lease, address=38.129.56.235
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8070] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8091] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:19:49 np0005590810 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 21 10:19:49 np0005590810 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8176] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8178] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8179] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8185] device (lo): Activation: successful, device activated.
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8190] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8195] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8198] device (eth0): Activation: successful, device activated.
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8203] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 21 10:19:49 np0005590810 NetworkManager[859]: <info>  [1769008789.8205] manager: startup complete
Jan 21 10:19:49 np0005590810 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 21 10:19:49 np0005590810 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 21 10:19:49 np0005590810 systemd[1]: Reached target NFS client services.
Jan 21 10:19:49 np0005590810 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 21 10:19:49 np0005590810 systemd[1]: Reached target Remote File Systems.
Jan 21 10:19:49 np0005590810 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 21 10:19:49 np0005590810 systemd[1]: Finished Network Manager Wait Online.
Jan 21 10:19:49 np0005590810 systemd[1]: Starting Cloud-init: Network Stage...
Jan 21 10:19:50 np0005590810 cloud-init[923]: Cloud-init v. 24.4-8.el9 running 'init' at Wed, 21 Jan 2026 15:19:50 +0000. Up 6.87 seconds.
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: |  eth0  | True |        38.129.56.235        | 255.255.255.0 | global | fa:16:3e:9d:07:20 |
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fe9d:720/64 |       .       |  link  | fa:16:3e:9d:07:20 |
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 21 10:19:50 np0005590810 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 21 10:19:51 np0005590810 cloud-init[923]: Generating public/private rsa key pair.
Jan 21 10:19:51 np0005590810 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 21 10:19:51 np0005590810 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 21 10:19:51 np0005590810 cloud-init[923]: The key fingerprint is:
Jan 21 10:19:51 np0005590810 cloud-init[923]: SHA256:6xLCYq75Q3aqmr0JX9WopzMcMmxLxTAQb9D70H5dV8o root@np0005590810.novalocal
Jan 21 10:19:51 np0005590810 cloud-init[923]: The key's randomart image is:
Jan 21 10:19:51 np0005590810 cloud-init[923]: +---[RSA 3072]----+
Jan 21 10:19:51 np0005590810 cloud-init[923]: |++               |
Jan 21 10:19:51 np0005590810 cloud-init[923]: | o+           .  |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |  o*       . o   |
Jan 21 10:19:51 np0005590810 cloud-init[923]: | .o +  o  . E    |
Jan 21 10:19:51 np0005590810 cloud-init[923]: | . *  o.S. .     |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |  @ Boo ..       |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |.B Oo+...        |
Jan 21 10:19:51 np0005590810 cloud-init[923]: | *=o+o..         |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |B+*o.o ..        |
Jan 21 10:19:51 np0005590810 cloud-init[923]: +----[SHA256]-----+
Jan 21 10:19:51 np0005590810 cloud-init[923]: Generating public/private ecdsa key pair.
Jan 21 10:19:51 np0005590810 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 21 10:19:51 np0005590810 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 21 10:19:51 np0005590810 cloud-init[923]: The key fingerprint is:
Jan 21 10:19:51 np0005590810 cloud-init[923]: SHA256:HxvvE11girE29/CsPFaOwVrg+FWScq0Oyx7tlG7VPIE root@np0005590810.novalocal
Jan 21 10:19:51 np0005590810 cloud-init[923]: The key's randomart image is:
Jan 21 10:19:51 np0005590810 cloud-init[923]: +---[ECDSA 256]---+
Jan 21 10:19:51 np0005590810 cloud-init[923]: |                 |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |          .   o  |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |           + =.. |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |          B BEo..|
Jan 21 10:19:51 np0005590810 cloud-init[923]: |        S+oB X +.|
Jan 21 10:19:51 np0005590810 cloud-init[923]: |        ..o*B.B.o|
Jan 21 10:19:51 np0005590810 cloud-init[923]: |         o+O=O  .|
Jan 21 10:19:51 np0005590810 cloud-init[923]: |          =*X .  |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |         ..o+o   |
Jan 21 10:19:51 np0005590810 cloud-init[923]: +----[SHA256]-----+
Jan 21 10:19:51 np0005590810 cloud-init[923]: Generating public/private ed25519 key pair.
Jan 21 10:19:51 np0005590810 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 21 10:19:51 np0005590810 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 21 10:19:51 np0005590810 cloud-init[923]: The key fingerprint is:
Jan 21 10:19:51 np0005590810 cloud-init[923]: SHA256:3odWA96yR47eHuqNqG5eYFWF1tGiVKEo0b0kGoU6wgI root@np0005590810.novalocal
Jan 21 10:19:51 np0005590810 cloud-init[923]: The key's randomart image is:
Jan 21 10:19:51 np0005590810 cloud-init[923]: +--[ED25519 256]--+
Jan 21 10:19:51 np0005590810 cloud-init[923]: |      .+...==+   |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |E     o.oo*.o .  |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |..   ..oo=oo .   |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |. o o .o .oo     |
Jan 21 10:19:51 np0005590810 cloud-init[923]: | . . .o S o =    |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |     . o . O .   |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |        o * =    |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |      .. + B .   |
Jan 21 10:19:51 np0005590810 cloud-init[923]: |     ++...=.+    |
Jan 21 10:19:51 np0005590810 cloud-init[923]: +----[SHA256]-----+
Jan 21 10:19:51 np0005590810 systemd[1]: Finished Cloud-init: Network Stage.
Jan 21 10:19:51 np0005590810 systemd[1]: Reached target Cloud-config availability.
Jan 21 10:19:51 np0005590810 systemd[1]: Reached target Network is Online.
Jan 21 10:19:51 np0005590810 systemd[1]: Starting Cloud-init: Config Stage...
Jan 21 10:19:51 np0005590810 systemd[1]: Starting Crash recovery kernel arming...
Jan 21 10:19:51 np0005590810 systemd[1]: Starting Notify NFS peers of a restart...
Jan 21 10:19:51 np0005590810 systemd[1]: Starting System Logging Service...
Jan 21 10:19:51 np0005590810 sm-notify[1005]: Version 2.5.4 starting
Jan 21 10:19:51 np0005590810 systemd[1]: Starting OpenSSH server daemon...
Jan 21 10:19:51 np0005590810 systemd[1]: Starting Permit User Sessions...
Jan 21 10:19:51 np0005590810 systemd[1]: Started Notify NFS peers of a restart.
Jan 21 10:19:51 np0005590810 systemd[1]: Finished Permit User Sessions.
Jan 21 10:19:51 np0005590810 systemd[1]: Started OpenSSH server daemon.
Jan 21 10:19:51 np0005590810 systemd[1]: Started Command Scheduler.
Jan 21 10:19:51 np0005590810 systemd[1]: Started Getty on tty1.
Jan 21 10:19:51 np0005590810 systemd[1]: Started Serial Getty on ttyS0.
Jan 21 10:19:51 np0005590810 systemd[1]: Reached target Login Prompts.
Jan 21 10:19:51 np0005590810 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Jan 21 10:19:51 np0005590810 systemd[1]: Started System Logging Service.
Jan 21 10:19:51 np0005590810 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 21 10:19:51 np0005590810 systemd[1]: Reached target Multi-User System.
Jan 21 10:19:51 np0005590810 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 21 10:19:51 np0005590810 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 21 10:19:51 np0005590810 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 21 10:19:51 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 10:19:51 np0005590810 kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Jan 21 10:19:51 np0005590810 kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 21 10:19:51 np0005590810 cloud-init[1154]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Wed, 21 Jan 2026 15:19:51 +0000. Up 8.66 seconds.
Jan 21 10:19:52 np0005590810 systemd[1]: Finished Cloud-init: Config Stage.
Jan 21 10:19:52 np0005590810 systemd[1]: Starting Cloud-init: Final Stage...
Jan 21 10:19:52 np0005590810 dracut[1267]: dracut-057-102.git20250818.el9
Jan 21 10:19:52 np0005590810 dracut[1269]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 21 10:19:52 np0005590810 cloud-init[1295]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Wed, 21 Jan 2026 15:19:52 +0000. Up 9.07 seconds.
Jan 21 10:19:52 np0005590810 cloud-init[1326]: #############################################################
Jan 21 10:19:52 np0005590810 cloud-init[1331]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 21 10:19:52 np0005590810 cloud-init[1338]: 256 SHA256:HxvvE11girE29/CsPFaOwVrg+FWScq0Oyx7tlG7VPIE root@np0005590810.novalocal (ECDSA)
Jan 21 10:19:52 np0005590810 cloud-init[1344]: 256 SHA256:3odWA96yR47eHuqNqG5eYFWF1tGiVKEo0b0kGoU6wgI root@np0005590810.novalocal (ED25519)
Jan 21 10:19:52 np0005590810 cloud-init[1346]: 3072 SHA256:6xLCYq75Q3aqmr0JX9WopzMcMmxLxTAQb9D70H5dV8o root@np0005590810.novalocal (RSA)
Jan 21 10:19:52 np0005590810 cloud-init[1347]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 21 10:19:52 np0005590810 cloud-init[1348]: #############################################################
Jan 21 10:19:52 np0005590810 cloud-init[1295]: Cloud-init v. 24.4-8.el9 finished at Wed, 21 Jan 2026 15:19:52 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.25 seconds
Jan 21 10:19:52 np0005590810 systemd[1]: Finished Cloud-init: Final Stage.
Jan 21 10:19:52 np0005590810 systemd[1]: Reached target Cloud-init target.
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 21 10:19:52 np0005590810 dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: memstrack is not available
Jan 21 10:19:53 np0005590810 dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 21 10:19:53 np0005590810 dracut[1269]: memstrack is not available
Jan 21 10:19:53 np0005590810 dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 21 10:19:53 np0005590810 dracut[1269]: *** Including module: systemd ***
Jan 21 10:19:54 np0005590810 dracut[1269]: *** Including module: fips ***
Jan 21 10:19:54 np0005590810 dracut[1269]: *** Including module: systemd-initrd ***
Jan 21 10:19:54 np0005590810 dracut[1269]: *** Including module: i18n ***
Jan 21 10:19:54 np0005590810 dracut[1269]: *** Including module: drm ***
Jan 21 10:19:54 np0005590810 chronyd[791]: Selected source 206.108.0.131 (2.centos.pool.ntp.org)
Jan 21 10:19:54 np0005590810 chronyd[791]: System clock TAI offset set to 37 seconds
Jan 21 10:19:55 np0005590810 dracut[1269]: *** Including module: prefixdevname ***
Jan 21 10:19:55 np0005590810 dracut[1269]: *** Including module: kernel-modules ***
Jan 21 10:19:55 np0005590810 kernel: block vda: the capability attribute has been deprecated.
Jan 21 10:19:55 np0005590810 dracut[1269]: *** Including module: kernel-modules-extra ***
Jan 21 10:19:55 np0005590810 dracut[1269]: *** Including module: qemu ***
Jan 21 10:19:55 np0005590810 dracut[1269]: *** Including module: fstab-sys ***
Jan 21 10:19:55 np0005590810 dracut[1269]: *** Including module: rootfs-block ***
Jan 21 10:19:55 np0005590810 dracut[1269]: *** Including module: terminfo ***
Jan 21 10:19:55 np0005590810 dracut[1269]: *** Including module: udev-rules ***
Jan 21 10:19:56 np0005590810 dracut[1269]: Skipping udev rule: 91-permissions.rules
Jan 21 10:19:56 np0005590810 dracut[1269]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 21 10:19:56 np0005590810 dracut[1269]: *** Including module: virtiofs ***
Jan 21 10:19:56 np0005590810 dracut[1269]: *** Including module: dracut-systemd ***
Jan 21 10:19:56 np0005590810 dracut[1269]: *** Including module: usrmount ***
Jan 21 10:19:56 np0005590810 dracut[1269]: *** Including module: base ***
Jan 21 10:19:56 np0005590810 dracut[1269]: *** Including module: fs-lib ***
Jan 21 10:19:56 np0005590810 dracut[1269]: *** Including module: kdumpbase ***
Jan 21 10:19:57 np0005590810 dracut[1269]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 21 10:19:57 np0005590810 dracut[1269]:  microcode_ctl module: mangling fw_dir
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: configuration "intel" is ignored
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 21 10:19:57 np0005590810 dracut[1269]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 21 10:19:57 np0005590810 dracut[1269]: *** Including module: openssl ***
Jan 21 10:19:57 np0005590810 dracut[1269]: *** Including module: shutdown ***
Jan 21 10:19:57 np0005590810 dracut[1269]: *** Including module: squash ***
Jan 21 10:19:57 np0005590810 dracut[1269]: *** Including modules done ***
Jan 21 10:19:57 np0005590810 dracut[1269]: *** Installing kernel module dependencies ***
Jan 21 10:19:58 np0005590810 dracut[1269]: *** Installing kernel module dependencies done ***
Jan 21 10:19:58 np0005590810 dracut[1269]: *** Resolving executable dependencies ***
Jan 21 10:19:59 np0005590810 irqbalance[784]: Cannot change IRQ 35 affinity: Operation not permitted
Jan 21 10:19:59 np0005590810 irqbalance[784]: IRQ 35 affinity is now unmanaged
Jan 21 10:19:59 np0005590810 irqbalance[784]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 21 10:19:59 np0005590810 irqbalance[784]: IRQ 33 affinity is now unmanaged
Jan 21 10:19:59 np0005590810 irqbalance[784]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 21 10:19:59 np0005590810 irqbalance[784]: IRQ 31 affinity is now unmanaged
Jan 21 10:19:59 np0005590810 irqbalance[784]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 21 10:19:59 np0005590810 irqbalance[784]: IRQ 26 affinity is now unmanaged
Jan 21 10:19:59 np0005590810 irqbalance[784]: Cannot change IRQ 34 affinity: Operation not permitted
Jan 21 10:19:59 np0005590810 irqbalance[784]: IRQ 34 affinity is now unmanaged
Jan 21 10:19:59 np0005590810 irqbalance[784]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 21 10:19:59 np0005590810 irqbalance[784]: IRQ 32 affinity is now unmanaged
Jan 21 10:19:59 np0005590810 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 10:20:00 np0005590810 dracut[1269]: *** Resolving executable dependencies done ***
Jan 21 10:20:00 np0005590810 dracut[1269]: *** Generating early-microcode cpio image ***
Jan 21 10:20:00 np0005590810 dracut[1269]: *** Store current command line parameters ***
Jan 21 10:20:00 np0005590810 dracut[1269]: Stored kernel commandline:
Jan 21 10:20:00 np0005590810 dracut[1269]: No dracut internal kernel commandline stored in the initramfs
Jan 21 10:20:00 np0005590810 dracut[1269]: *** Install squash loader ***
Jan 21 10:20:01 np0005590810 dracut[1269]: *** Squashing the files inside the initramfs ***
Jan 21 10:20:02 np0005590810 dracut[1269]: *** Squashing the files inside the initramfs done ***
Jan 21 10:20:02 np0005590810 dracut[1269]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 21 10:20:02 np0005590810 dracut[1269]: *** Hardlinking files ***
Jan 21 10:20:02 np0005590810 dracut[1269]: *** Hardlinking files done ***
Jan 21 10:20:02 np0005590810 dracut[1269]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 21 10:20:03 np0005590810 kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Jan 21 10:20:03 np0005590810 kdumpctl[1016]: kdump: Starting kdump: [OK]
Jan 21 10:20:03 np0005590810 systemd[1]: Finished Crash recovery kernel arming.
Jan 21 10:20:03 np0005590810 systemd[1]: Startup finished in 1.669s (kernel) + 2.293s (initrd) + 15.989s (userspace) = 19.951s.
Jan 21 10:20:19 np0005590810 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 10:21:02 np0005590810 systemd[1]: Created slice User Slice of UID 1000.
Jan 21 10:21:02 np0005590810 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 21 10:21:02 np0005590810 systemd-logind[795]: New session 1 of user zuul.
Jan 21 10:21:02 np0005590810 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 21 10:21:02 np0005590810 systemd[1]: Starting User Manager for UID 1000...
Jan 21 10:21:03 np0005590810 systemd[4309]: Queued start job for default target Main User Target.
Jan 21 10:21:03 np0005590810 systemd[4309]: Created slice User Application Slice.
Jan 21 10:21:03 np0005590810 systemd[4309]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 10:21:03 np0005590810 systemd[4309]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 10:21:03 np0005590810 systemd[4309]: Reached target Paths.
Jan 21 10:21:03 np0005590810 systemd[4309]: Reached target Timers.
Jan 21 10:21:03 np0005590810 systemd[4309]: Starting D-Bus User Message Bus Socket...
Jan 21 10:21:03 np0005590810 systemd[4309]: Starting Create User's Volatile Files and Directories...
Jan 21 10:21:03 np0005590810 systemd[4309]: Finished Create User's Volatile Files and Directories.
Jan 21 10:21:03 np0005590810 systemd[4309]: Listening on D-Bus User Message Bus Socket.
Jan 21 10:21:03 np0005590810 systemd[4309]: Reached target Sockets.
Jan 21 10:21:03 np0005590810 systemd[4309]: Reached target Basic System.
Jan 21 10:21:03 np0005590810 systemd[4309]: Reached target Main User Target.
Jan 21 10:21:03 np0005590810 systemd[4309]: Startup finished in 118ms.
Jan 21 10:21:03 np0005590810 systemd[1]: Started User Manager for UID 1000.
Jan 21 10:21:03 np0005590810 systemd[1]: Started Session 1 of User zuul.
Jan 21 10:21:03 np0005590810 python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:21:07 np0005590810 python3[4419]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:21:16 np0005590810 python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:21:17 np0005590810 python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 21 10:21:19 np0005590810 python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+G33o8H/fj1YoNWcIiLO+3c7GeV4q+KTvWM7WEnGL1PCppthtssRuaSwz4rzABhPgAB4at+zpXfdxnbE2GCCpUE2v5OC0JB5zqTlZq3LywSo98Qa/M/MZidSvF9nsKZvmcT+MeTgtIfBtvj6qW5ZSFZltrtzO/F+FtW2kH24bx5YHw8vi8egP5am5337lca5ouP0ZsgVoMPrpdfjxpxfuX2fl9729BCGcfqrKrIS9dV4WMPDLElsei36+025oey034s18y3jw71ZdKfaL6i57l+Ac2QjYIiCUGI/5qBQ+M82uvnrvcP7nSN7tNSDlJiCs14kIrjWaPQtAhMHp/PE3ebImU1aXOj7tlersgF6VcWzUZH5F9z2jBBRxD3tG8nzQpyQfroatoX4D0cUmmH0VxKhWC5xRelsbnGN1VwhduqP5dcKF8TnObAhMuZUIydLX9sKI08xHY0k9/kmSkfAQ84PRNpQcpZn7wrtJ7imsCrplj4HV/POY6+Ilkp5B74U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:19 np0005590810 python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:20 np0005590810 python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:21:20 np0005590810 python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769008880.009112-251-46378287243450/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=75b4bb1d18444a80b8341cac177014a8_id_rsa follow=False checksum=39181c04c4ee67c4b9dfa35c16d2c314a4f2f4bd backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:21 np0005590810 python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:21:21 np0005590810 python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769008880.9616864-306-276951273541570/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=75b4bb1d18444a80b8341cac177014a8_id_rsa.pub follow=False checksum=6a18df1bf8f6f144a70ce377ce5313de86ced9e3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:22 np0005590810 python3[4979]: ansible-ping Invoked with data=pong
Jan 21 10:21:23 np0005590810 python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:21:25 np0005590810 python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 21 10:21:27 np0005590810 python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:27 np0005590810 python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:27 np0005590810 python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:28 np0005590810 python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:28 np0005590810 python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:28 np0005590810 python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:30 np0005590810 python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:31 np0005590810 python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:21:31 np0005590810 python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769008890.6575403-31-109037935392127/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:32 np0005590810 python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:32 np0005590810 python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:32 np0005590810 python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:33 np0005590810 python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:33 np0005590810 python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:33 np0005590810 python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:33 np0005590810 python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:34 np0005590810 python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:34 np0005590810 python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:34 np0005590810 python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:35 np0005590810 python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:35 np0005590810 python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:35 np0005590810 python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:35 np0005590810 python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:36 np0005590810 python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:36 np0005590810 python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:36 np0005590810 python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:37 np0005590810 python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:37 np0005590810 python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:37 np0005590810 python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:37 np0005590810 python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:38 np0005590810 python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:38 np0005590810 python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:38 np0005590810 python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:39 np0005590810 python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:39 np0005590810 python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:21:41 np0005590810 python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 21 10:21:41 np0005590810 systemd[1]: Starting Time & Date Service...
Jan 21 10:21:41 np0005590810 systemd[1]: Started Time & Date Service.
Jan 21 10:21:41 np0005590810 systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Jan 21 10:21:41 np0005590810 python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:42 np0005590810 python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:21:42 np0005590810 python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769008902.1066825-251-132862331777588/source _original_basename=tmpi1y_tr3c follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:43 np0005590810 python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:21:43 np0005590810 python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769008903.0155933-301-137137641442127/source _original_basename=tmpgyg1vcvk follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:44 np0005590810 python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:21:44 np0005590810 python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769008904.1464646-381-211599342276324/source _original_basename=tmpqc9ij9zu follow=False checksum=1dd7e9a6b0c4920bce3e0a7c3a8145e06480c76d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:45 np0005590810 python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:21:45 np0005590810 python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:21:45 np0005590810 python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:21:46 np0005590810 python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769008905.698674-451-141282683868968/source _original_basename=tmpi7p9abad follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:21:46 np0005590810 python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-3609-1a7d-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:21:47 np0005590810 python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-3609-1a7d-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 21 10:21:48 np0005590810 python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:22:06 np0005590810 python3[6949]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:22:11 np0005590810 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 21 10:22:56 np0005590810 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 21 10:22:56 np0005590810 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 21 10:22:56 np0005590810 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 21 10:22:56 np0005590810 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 21 10:22:56 np0005590810 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 21 10:22:56 np0005590810 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 21 10:22:56 np0005590810 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 21 10:22:56 np0005590810 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 21 10:22:56 np0005590810 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 21 10:22:56 np0005590810 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 21 10:22:56 np0005590810 NetworkManager[859]: <info>  [1769008976.4581] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 21 10:22:56 np0005590810 systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 10:22:56 np0005590810 NetworkManager[859]: <info>  [1769008976.4795] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:22:56 np0005590810 NetworkManager[859]: <info>  [1769008976.4827] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 21 10:22:56 np0005590810 NetworkManager[859]: <info>  [1769008976.4829] device (eth1): carrier: link connected
Jan 21 10:22:56 np0005590810 NetworkManager[859]: <info>  [1769008976.4831] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 21 10:22:56 np0005590810 NetworkManager[859]: <info>  [1769008976.4837] policy: auto-activating connection 'Wired connection 1' (1a61991b-b038-3a52-8990-88651b7c1e06)
Jan 21 10:22:56 np0005590810 NetworkManager[859]: <info>  [1769008976.4840] device (eth1): Activation: starting connection 'Wired connection 1' (1a61991b-b038-3a52-8990-88651b7c1e06)
Jan 21 10:22:56 np0005590810 NetworkManager[859]: <info>  [1769008976.4841] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:22:56 np0005590810 NetworkManager[859]: <info>  [1769008976.4844] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:22:56 np0005590810 NetworkManager[859]: <info>  [1769008976.4847] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:22:56 np0005590810 NetworkManager[859]: <info>  [1769008976.4851] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 21 10:22:57 np0005590810 python3[6979]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-b796-0939-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:23:07 np0005590810 python3[7059]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:23:08 np0005590810 python3[7132]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769008987.3947105-104-148131692049996/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=679ba2fd3e81e85c51aa2ff82c96e24daa0f6bf5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:23:08 np0005590810 python3[7182]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 10:23:08 np0005590810 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 21 10:23:08 np0005590810 systemd[1]: Stopped Network Manager Wait Online.
Jan 21 10:23:08 np0005590810 systemd[1]: Stopping Network Manager Wait Online...
Jan 21 10:23:08 np0005590810 systemd[1]: Stopping Network Manager...
Jan 21 10:23:08 np0005590810 NetworkManager[859]: <info>  [1769008988.9496] caught SIGTERM, shutting down normally.
Jan 21 10:23:08 np0005590810 NetworkManager[859]: <info>  [1769008988.9505] dhcp4 (eth0): canceled DHCP transaction
Jan 21 10:23:08 np0005590810 NetworkManager[859]: <info>  [1769008988.9505] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 10:23:08 np0005590810 NetworkManager[859]: <info>  [1769008988.9505] dhcp4 (eth0): state changed no lease
Jan 21 10:23:08 np0005590810 NetworkManager[859]: <info>  [1769008988.9507] manager: NetworkManager state is now CONNECTING
Jan 21 10:23:08 np0005590810 NetworkManager[859]: <info>  [1769008988.9645] dhcp4 (eth1): canceled DHCP transaction
Jan 21 10:23:08 np0005590810 NetworkManager[859]: <info>  [1769008988.9647] dhcp4 (eth1): state changed no lease
Jan 21 10:23:08 np0005590810 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 10:23:08 np0005590810 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 10:23:08 np0005590810 NetworkManager[859]: <info>  [1769008988.9863] exiting (success)
Jan 21 10:23:09 np0005590810 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 21 10:23:09 np0005590810 systemd[1]: Stopped Network Manager.
Jan 21 10:23:09 np0005590810 systemd[1]: NetworkManager.service: Consumed 1.129s CPU time, 10.0M memory peak.
Jan 21 10:23:09 np0005590810 systemd[1]: Starting Network Manager...
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.0379] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:270b975d-78fa-4cd0-8c03-59ef0f09243d)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.0384] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.0438] manager[0x559c81086000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 21 10:23:09 np0005590810 systemd[1]: Starting Hostname Service...
Jan 21 10:23:09 np0005590810 systemd[1]: Started Hostname Service.
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1205] hostname: hostname: using hostnamed
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1205] hostname: static hostname changed from (none) to "np0005590810.novalocal"
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1210] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1215] manager[0x559c81086000]: rfkill: Wi-Fi hardware radio set enabled
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1216] manager[0x559c81086000]: rfkill: WWAN hardware radio set enabled
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1241] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1243] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1244] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1245] manager: Networking is enabled by state file
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1247] settings: Loaded settings plugin: keyfile (internal)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1250] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1270] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1280] dhcp: init: Using DHCP client 'internal'
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1283] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1288] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1292] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1298] device (lo): Activation: starting connection 'lo' (253b81e5-ac75-4452-a3e4-15be611c9139)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1303] device (eth0): carrier: link connected
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1306] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1310] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1311] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1315] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1320] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1330] device (eth1): carrier: link connected
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1338] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1347] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (1a61991b-b038-3a52-8990-88651b7c1e06) (indicated)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1348] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1356] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1366] device (eth1): Activation: starting connection 'Wired connection 1' (1a61991b-b038-3a52-8990-88651b7c1e06)
Jan 21 10:23:09 np0005590810 systemd[1]: Started Network Manager.
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1384] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1390] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1394] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1398] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1402] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1409] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1413] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1418] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1424] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1433] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1440] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1454] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1460] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1476] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1483] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1489] device (lo): Activation: successful, device activated.
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1497] dhcp4 (eth0): state changed new lease, address=38.129.56.235
Jan 21 10:23:09 np0005590810 systemd[1]: Starting Network Manager Wait Online...
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1503] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1569] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1620] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1624] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1628] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1632] device (eth0): Activation: successful, device activated.
Jan 21 10:23:09 np0005590810 NetworkManager[7198]: <info>  [1769008989.1641] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 21 10:23:09 np0005590810 python3[7266]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-b796-0939-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:23:19 np0005590810 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 10:23:36 np0005590810 systemd[4309]: Starting Mark boot as successful...
Jan 21 10:23:36 np0005590810 systemd[4309]: Finished Mark boot as successful.
Jan 21 10:23:39 np0005590810 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.2616] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 10:23:54 np0005590810 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 10:23:54 np0005590810 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.2909] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.2911] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.2917] device (eth1): Activation: successful, device activated.
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.2923] manager: startup complete
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.2925] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <warn>  [1769009034.2929] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.2934] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 21 10:23:54 np0005590810 systemd[1]: Finished Network Manager Wait Online.
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.3083] dhcp4 (eth1): canceled DHCP transaction
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.3084] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.3084] dhcp4 (eth1): state changed no lease
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.3097] policy: auto-activating connection 'ci-private-network' (6f25b135-dfb2-58ee-a797-68bd03650dcd)
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.3102] device (eth1): Activation: starting connection 'ci-private-network' (6f25b135-dfb2-58ee-a797-68bd03650dcd)
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.3103] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.3105] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.3110] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.3118] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.3143] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.3145] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:23:54 np0005590810 NetworkManager[7198]: <info>  [1769009034.3149] device (eth1): Activation: successful, device activated.
Jan 21 10:24:04 np0005590810 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 10:24:09 np0005590810 systemd-logind[795]: Session 1 logged out. Waiting for processes to exit.
Jan 21 10:25:10 np0005590810 systemd-logind[795]: New session 3 of user zuul.
Jan 21 10:25:10 np0005590810 systemd[1]: Started Session 3 of User zuul.
Jan 21 10:25:10 np0005590810 python3[7376]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:25:11 np0005590810 python3[7449]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769009110.4371943-373-219941556845706/source _original_basename=tmpe52qvhlx follow=False checksum=1a0e14f3f5399218337eaa8103b57076a3f1e039 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:25:15 np0005590810 systemd[1]: session-3.scope: Deactivated successfully.
Jan 21 10:25:15 np0005590810 systemd-logind[795]: Session 3 logged out. Waiting for processes to exit.
Jan 21 10:25:15 np0005590810 systemd-logind[795]: Removed session 3.
Jan 21 10:26:36 np0005590810 systemd[4309]: Created slice User Background Tasks Slice.
Jan 21 10:26:36 np0005590810 systemd[4309]: Starting Cleanup of User's Temporary Files and Directories...
Jan 21 10:26:36 np0005590810 systemd[4309]: Finished Cleanup of User's Temporary Files and Directories.
Jan 21 10:32:29 np0005590810 systemd-logind[795]: New session 4 of user zuul.
Jan 21 10:32:29 np0005590810 systemd[1]: Started Session 4 of User zuul.
Jan 21 10:32:29 np0005590810 python3[7511]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-66dc-b61c-00000000217d-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:32:30 np0005590810 python3[7539]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:32:30 np0005590810 python3[7566]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:32:30 np0005590810 python3[7592]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:32:30 np0005590810 python3[7618]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:32:31 np0005590810 python3[7644]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:32:32 np0005590810 python3[7722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:32:32 np0005590810 python3[7795]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769009551.7078836-536-96265173757684/source _original_basename=tmpmyoacgrz follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:32:33 np0005590810 python3[7845]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 10:32:33 np0005590810 systemd[1]: Reloading.
Jan 21 10:32:33 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:32:35 np0005590810 python3[7901]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 21 10:32:35 np0005590810 python3[7927]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:32:35 np0005590810 python3[7955]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:32:36 np0005590810 python3[7983]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:32:36 np0005590810 python3[8011]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:32:37 np0005590810 python3[8038]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-66dc-b61c-000000002184-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:32:37 np0005590810 python3[8068]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 10:32:39 np0005590810 irqbalance[784]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 21 10:32:39 np0005590810 irqbalance[784]: IRQ 30 affinity is now unmanaged
Jan 21 10:32:40 np0005590810 systemd[1]: session-4.scope: Deactivated successfully.
Jan 21 10:32:40 np0005590810 systemd[1]: session-4.scope: Consumed 4.083s CPU time.
Jan 21 10:32:40 np0005590810 systemd-logind[795]: Session 4 logged out. Waiting for processes to exit.
Jan 21 10:32:40 np0005590810 systemd-logind[795]: Removed session 4.
Jan 21 10:32:42 np0005590810 systemd-logind[795]: New session 5 of user zuul.
Jan 21 10:32:42 np0005590810 systemd[1]: Started Session 5 of User zuul.
Jan 21 10:32:42 np0005590810 python3[8102]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 10:32:48 np0005590810 setsebool[8141]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 21 10:32:48 np0005590810 setsebool[8141]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 21 10:33:02 np0005590810 kernel: SELinux:  Converting 383 SID table entries...
Jan 21 10:33:02 np0005590810 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 10:33:02 np0005590810 kernel: SELinux:  policy capability open_perms=1
Jan 21 10:33:02 np0005590810 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 10:33:02 np0005590810 kernel: SELinux:  policy capability always_check_network=0
Jan 21 10:33:02 np0005590810 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 10:33:02 np0005590810 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 10:33:02 np0005590810 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 10:33:13 np0005590810 kernel: SELinux:  Converting 386 SID table entries...
Jan 21 10:33:13 np0005590810 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 10:33:13 np0005590810 kernel: SELinux:  policy capability open_perms=1
Jan 21 10:33:13 np0005590810 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 10:33:13 np0005590810 kernel: SELinux:  policy capability always_check_network=0
Jan 21 10:33:13 np0005590810 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 10:33:13 np0005590810 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 10:33:13 np0005590810 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 10:33:31 np0005590810 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 21 10:33:31 np0005590810 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 10:33:31 np0005590810 systemd[1]: Starting man-db-cache-update.service...
Jan 21 10:33:31 np0005590810 systemd[1]: Reloading.
Jan 21 10:33:31 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:33:31 np0005590810 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 10:33:34 np0005590810 python3[10831]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-c0bc-196a-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:33:34 np0005590810 kernel: evm: overlay not supported
Jan 21 10:33:34 np0005590810 systemd[4309]: Starting D-Bus User Message Bus...
Jan 21 10:33:34 np0005590810 dbus-broker-launch[11900]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 21 10:33:34 np0005590810 dbus-broker-launch[11900]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 21 10:33:34 np0005590810 systemd[4309]: Started D-Bus User Message Bus.
Jan 21 10:33:34 np0005590810 dbus-broker-lau[11900]: Ready
Jan 21 10:33:34 np0005590810 systemd[4309]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 21 10:33:34 np0005590810 systemd[4309]: Created slice Slice /user.
Jan 21 10:33:34 np0005590810 systemd[4309]: podman-11767.scope: unit configures an IP firewall, but not running as root.
Jan 21 10:33:34 np0005590810 systemd[4309]: (This warning is only shown for the first unit using IP firewalling.)
Jan 21 10:33:34 np0005590810 systemd[4309]: Started podman-11767.scope.
Jan 21 10:33:35 np0005590810 systemd[4309]: Started podman-pause-41bb5373.scope.
Jan 21 10:33:35 np0005590810 python3[12699]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.129.56.27:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.129.56.27:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:33:35 np0005590810 python3[12699]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 21 10:33:36 np0005590810 systemd[1]: session-5.scope: Deactivated successfully.
Jan 21 10:33:36 np0005590810 systemd[1]: session-5.scope: Consumed 47.223s CPU time.
Jan 21 10:33:36 np0005590810 systemd-logind[795]: Session 5 logged out. Waiting for processes to exit.
Jan 21 10:33:36 np0005590810 systemd-logind[795]: Removed session 5.
Jan 21 10:34:02 np0005590810 systemd-logind[795]: New session 6 of user zuul.
Jan 21 10:34:02 np0005590810 systemd[1]: Started Session 6 of User zuul.
Jan 21 10:34:02 np0005590810 python3[23384]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPbgiPnbgE+YOpgYE7ZvRNPP3elIhZd4BtPaOTOX6xMxTlOHSLhLNhQQ+mF/wtaL2Xe0gD4fqtYFYgOuIZYuk2M= zuul@np0005590809.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:34:03 np0005590810 python3[23566]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPbgiPnbgE+YOpgYE7ZvRNPP3elIhZd4BtPaOTOX6xMxTlOHSLhLNhQQ+mF/wtaL2Xe0gD4fqtYFYgOuIZYuk2M= zuul@np0005590809.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:34:04 np0005590810 python3[23884]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005590810.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 21 10:34:06 np0005590810 python3[24479]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPbgiPnbgE+YOpgYE7ZvRNPP3elIhZd4BtPaOTOX6xMxTlOHSLhLNhQQ+mF/wtaL2Xe0gD4fqtYFYgOuIZYuk2M= zuul@np0005590809.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 10:34:06 np0005590810 python3[24743]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:34:07 np0005590810 python3[25019]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769009646.4812164-167-202710686162979/source _original_basename=tmpql_v2nso follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:34:08 np0005590810 python3[25374]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 21 10:34:08 np0005590810 systemd[1]: Starting Hostname Service...
Jan 21 10:34:08 np0005590810 systemd[1]: Started Hostname Service.
Jan 21 10:34:08 np0005590810 systemd-hostnamed[25478]: Changed pretty hostname to 'compute-0'
Jan 21 10:34:08 np0005590810 systemd-hostnamed[25478]: Hostname set to <compute-0> (static)
Jan 21 10:34:08 np0005590810 NetworkManager[7198]: <info>  [1769009648.2920] hostname: static hostname changed from "np0005590810.novalocal" to "compute-0"
Jan 21 10:34:08 np0005590810 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 10:34:08 np0005590810 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 10:34:09 np0005590810 systemd[1]: session-6.scope: Deactivated successfully.
Jan 21 10:34:09 np0005590810 systemd[1]: session-6.scope: Consumed 2.273s CPU time.
Jan 21 10:34:09 np0005590810 systemd-logind[795]: Session 6 logged out. Waiting for processes to exit.
Jan 21 10:34:09 np0005590810 systemd-logind[795]: Removed session 6.
Jan 21 10:34:18 np0005590810 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 10:34:21 np0005590810 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 10:34:21 np0005590810 systemd[1]: Finished man-db-cache-update.service.
Jan 21 10:34:21 np0005590810 systemd[1]: man-db-cache-update.service: Consumed 58.559s CPU time.
Jan 21 10:34:21 np0005590810 systemd[1]: run-r8ef30848b60c4f80941e13f2117785c8.service: Deactivated successfully.
Jan 21 10:34:38 np0005590810 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 10:35:06 np0005590810 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 21 10:35:06 np0005590810 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 21 10:35:06 np0005590810 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 21 10:35:06 np0005590810 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 21 10:38:12 np0005590810 systemd-logind[795]: New session 7 of user zuul.
Jan 21 10:38:12 np0005590810 systemd[1]: Started Session 7 of User zuul.
Jan 21 10:38:13 np0005590810 python3[29999]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:38:14 np0005590810 python3[30115]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:38:15 np0005590810 python3[30188]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769009894.6752229-34001-247484927872142/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:38:15 np0005590810 python3[30214]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:38:16 np0005590810 python3[30287]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769009894.6752229-34001-247484927872142/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:38:16 np0005590810 python3[30313]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:38:16 np0005590810 python3[30386]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769009894.6752229-34001-247484927872142/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:38:16 np0005590810 python3[30412]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:38:17 np0005590810 python3[30485]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769009894.6752229-34001-247484927872142/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:38:17 np0005590810 python3[30511]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:38:17 np0005590810 python3[30584]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769009894.6752229-34001-247484927872142/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:38:18 np0005590810 python3[30610]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:38:18 np0005590810 python3[30683]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769009894.6752229-34001-247484927872142/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:38:18 np0005590810 python3[30709]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 10:38:19 np0005590810 python3[30782]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769009894.6752229-34001-247484927872142/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:38:31 np0005590810 python3[30840]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:43:30 np0005590810 systemd[1]: session-7.scope: Deactivated successfully.
Jan 21 10:43:30 np0005590810 systemd[1]: session-7.scope: Consumed 4.895s CPU time.
Jan 21 10:43:30 np0005590810 systemd-logind[795]: Session 7 logged out. Waiting for processes to exit.
Jan 21 10:43:30 np0005590810 systemd-logind[795]: Removed session 7.
Jan 21 10:52:30 np0005590810 systemd-logind[795]: New session 8 of user zuul.
Jan 21 10:52:30 np0005590810 systemd[1]: Started Session 8 of User zuul.
Jan 21 10:52:31 np0005590810 python3.9[31003]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:52:32 np0005590810 python3.9[31184]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:52:43 np0005590810 systemd[1]: session-8.scope: Deactivated successfully.
Jan 21 10:52:43 np0005590810 systemd[1]: session-8.scope: Consumed 7.985s CPU time.
Jan 21 10:52:43 np0005590810 systemd-logind[795]: Session 8 logged out. Waiting for processes to exit.
Jan 21 10:52:43 np0005590810 systemd-logind[795]: Removed session 8.
Jan 21 10:53:01 np0005590810 systemd-logind[795]: New session 9 of user zuul.
Jan 21 10:53:01 np0005590810 systemd[1]: Started Session 9 of User zuul.
Jan 21 10:53:01 np0005590810 python3.9[31397]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 21 10:53:03 np0005590810 python3.9[31571]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:53:04 np0005590810 python3.9[31723]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:53:05 np0005590810 python3.9[31876]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 10:53:05 np0005590810 python3.9[32028]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:53:06 np0005590810 python3.9[32180]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 10:53:07 np0005590810 python3.9[32303]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769010786.1803856-172-33874706411529/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:53:08 np0005590810 python3.9[32455]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:53:09 np0005590810 python3.9[32611]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 10:53:09 np0005590810 python3.9[32763]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 10:53:10 np0005590810 python3.9[32913]: ansible-ansible.builtin.service_facts Invoked
Jan 21 10:53:16 np0005590810 python3.9[33166]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:53:17 np0005590810 python3.9[33316]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:53:18 np0005590810 python3.9[33470]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:53:19 np0005590810 python3.9[33628]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 10:53:20 np0005590810 python3.9[33712]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 10:54:06 np0005590810 systemd[1]: Reloading.
Jan 21 10:54:06 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:54:06 np0005590810 systemd[1]: Starting dnf makecache...
Jan 21 10:54:06 np0005590810 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 21 10:54:06 np0005590810 dnf[33920]: Failed determining last makecache time.
Jan 21 10:54:06 np0005590810 dnf[33920]: delorean-openstack-barbican-42b4c41831408a8e323 149 kB/s | 3.0 kB     00:00
Jan 21 10:54:06 np0005590810 dnf[33920]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 176 kB/s | 3.0 kB     00:00
Jan 21 10:54:06 np0005590810 dnf[33920]: delorean-openstack-cinder-1c00d6490d88e436f26ef 174 kB/s | 3.0 kB     00:00
Jan 21 10:54:06 np0005590810 dnf[33920]: delorean-python-stevedore-c4acc5639fd2329372142 178 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-python-cloudkitty-tests-tempest-2c80f8 187 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-os-refresh-config-9bfc52b5049be2d8de61 169 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 164 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 systemd[1]: Reloading.
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-python-designate-tests-tempest-347fdbc 166 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-openstack-glance-1fd12c29b339f30fe823e 169 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 145 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-openstack-manila-3c01b7181572c95dac462 173 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-python-whitebox-neutron-tests-tempest- 163 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-openstack-octavia-ba397f07a7331190208c 182 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-openstack-watcher-c014f81a8647287f6dcc 196 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-ansible-config_template-5ccaa22121a7ff 194 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 176 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-openstack-swift-dc98a8463506ac520c469a 175 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-python-tempestconf-8515371b7cceebd4282 175 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 21 10:54:07 np0005590810 dnf[33920]: delorean-openstack-heat-ui-013accbfd179753bc3f0 179 kB/s | 3.0 kB     00:00
Jan 21 10:54:07 np0005590810 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 21 10:54:07 np0005590810 systemd[1]: Reloading.
Jan 21 10:54:07 np0005590810 dnf[33920]: CentOS Stream 9 - BaseOS                         62 kB/s | 6.7 kB     00:00
Jan 21 10:54:07 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:54:07 np0005590810 dnf[33920]: CentOS Stream 9 - AppStream                      70 kB/s | 6.8 kB     00:00
Jan 21 10:54:07 np0005590810 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 21 10:54:07 np0005590810 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Jan 21 10:54:07 np0005590810 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Jan 21 10:54:07 np0005590810 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Jan 21 10:54:07 np0005590810 dnf[33920]: CentOS Stream 9 - CRB                            66 kB/s | 6.6 kB     00:00
Jan 21 10:54:07 np0005590810 dnf[33920]: CentOS Stream 9 - Extras packages                79 kB/s | 7.3 kB     00:00
Jan 21 10:54:08 np0005590810 dnf[33920]: dlrn-antelope-testing                           5.3 kB/s | 3.0 kB     00:00
Jan 21 10:54:08 np0005590810 dnf[33920]: dlrn-antelope-build-deps                        179 kB/s | 3.0 kB     00:00
Jan 21 10:54:08 np0005590810 dnf[33920]: centos9-rabbitmq                                121 kB/s | 3.0 kB     00:00
Jan 21 10:54:08 np0005590810 dnf[33920]: centos9-storage                                 137 kB/s | 3.0 kB     00:00
Jan 21 10:54:08 np0005590810 dnf[33920]: centos9-opstools                                149 kB/s | 3.0 kB     00:00
Jan 21 10:54:08 np0005590810 dnf[33920]: NFV SIG OpenvSwitch                             150 kB/s | 3.0 kB     00:00
Jan 21 10:54:08 np0005590810 dnf[33920]: repo-setup-centos-appstream                     187 kB/s | 4.4 kB     00:00
Jan 21 10:54:08 np0005590810 dnf[33920]: repo-setup-centos-baseos                        142 kB/s | 3.9 kB     00:00
Jan 21 10:54:08 np0005590810 dnf[33920]: repo-setup-centos-highavailability              170 kB/s | 3.9 kB     00:00
Jan 21 10:54:08 np0005590810 dnf[33920]: repo-setup-centos-powertools                    145 kB/s | 4.3 kB     00:00
Jan 21 10:54:09 np0005590810 dnf[33920]: Extra Packages for Enterprise Linux 9 - x86_64  194 kB/s |  30 kB     00:00
Jan 21 10:54:09 np0005590810 dnf[33920]: Metadata cache created.
Jan 21 10:54:09 np0005590810 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 21 10:54:09 np0005590810 systemd[1]: Finished dnf makecache.
Jan 21 10:54:09 np0005590810 systemd[1]: dnf-makecache.service: Consumed 1.634s CPU time.
Jan 21 10:55:11 np0005590810 kernel: SELinux:  Converting 2722 SID table entries...
Jan 21 10:55:11 np0005590810 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 10:55:11 np0005590810 kernel: SELinux:  policy capability open_perms=1
Jan 21 10:55:11 np0005590810 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 10:55:11 np0005590810 kernel: SELinux:  policy capability always_check_network=0
Jan 21 10:55:11 np0005590810 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 10:55:11 np0005590810 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 10:55:11 np0005590810 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 10:55:11 np0005590810 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 21 10:55:11 np0005590810 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 10:55:11 np0005590810 systemd[1]: Starting man-db-cache-update.service...
Jan 21 10:55:11 np0005590810 systemd[1]: Reloading.
Jan 21 10:55:11 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:55:11 np0005590810 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 10:55:12 np0005590810 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 10:55:12 np0005590810 systemd[1]: Finished man-db-cache-update.service.
Jan 21 10:55:12 np0005590810 systemd[1]: man-db-cache-update.service: Consumed 1.223s CPU time.
Jan 21 10:55:12 np0005590810 systemd[1]: run-rb17036b5f89b4ae7afaae09a9ba5c911.service: Deactivated successfully.
Jan 21 10:55:45 np0005590810 python3.9[35271]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:55:47 np0005590810 python3.9[35552]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 21 10:55:48 np0005590810 python3.9[35704]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 21 10:55:52 np0005590810 python3.9[35858]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:55:53 np0005590810 python3.9[36010]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 21 10:55:57 np0005590810 python3.9[36162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 10:56:00 np0005590810 python3.9[36314]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 10:56:01 np0005590810 python3.9[36437]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769010958.008015-661-265134297025560/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1cea5a8eed1224d858018fe9be73f8229d34ef3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:56:03 np0005590810 python3.9[36589]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 10:56:04 np0005590810 python3.9[36741]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:56:10 np0005590810 python3.9[36894]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:56:11 np0005590810 python3.9[37046]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 21 10:56:11 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 10:56:12 np0005590810 python3.9[37200]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 10:56:13 np0005590810 python3.9[37358]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 10:56:14 np0005590810 python3.9[37518]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 21 10:56:15 np0005590810 python3.9[37671]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 10:56:17 np0005590810 python3.9[37829]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 21 10:56:18 np0005590810 python3.9[37981]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 10:56:25 np0005590810 python3.9[38134]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 10:56:25 np0005590810 python3.9[38286]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 10:56:26 np0005590810 python3.9[38409]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769010985.4313357-1018-174009270550426/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 10:56:27 np0005590810 python3.9[38561]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 10:56:27 np0005590810 systemd[1]: Starting Load Kernel Modules...
Jan 21 10:56:27 np0005590810 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 21 10:56:27 np0005590810 kernel: Bridge firewalling registered
Jan 21 10:56:27 np0005590810 systemd-modules-load[38565]: Inserted module 'br_netfilter'
Jan 21 10:56:27 np0005590810 systemd[1]: Finished Load Kernel Modules.
Jan 21 10:56:28 np0005590810 python3.9[38720]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 10:56:29 np0005590810 python3.9[38843]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769010988.0629852-1087-74735240498725/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 10:56:30 np0005590810 python3.9[38995]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 10:56:33 np0005590810 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Jan 21 10:56:33 np0005590810 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Jan 21 10:56:33 np0005590810 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 10:56:33 np0005590810 systemd[1]: Starting man-db-cache-update.service...
Jan 21 10:56:33 np0005590810 systemd[1]: Reloading.
Jan 21 10:56:33 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:56:33 np0005590810 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 10:56:37 np0005590810 python3.9[42385]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 10:56:37 np0005590810 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 10:56:37 np0005590810 systemd[1]: Finished man-db-cache-update.service.
Jan 21 10:56:37 np0005590810 systemd[1]: man-db-cache-update.service: Consumed 5.059s CPU time.
Jan 21 10:56:37 np0005590810 systemd[1]: run-rf4369be5a5774019a494f852f22469bf.service: Deactivated successfully.
Jan 21 10:56:38 np0005590810 python3.9[42863]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 21 10:56:38 np0005590810 python3.9[43013]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 10:56:39 np0005590810 python3.9[43165]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:56:39 np0005590810 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 21 10:56:40 np0005590810 systemd[1]: Starting Authorization Manager...
Jan 21 10:56:40 np0005590810 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 21 10:56:40 np0005590810 polkitd[43382]: Started polkitd version 0.117
Jan 21 10:56:40 np0005590810 systemd[1]: Started Authorization Manager.
Jan 21 10:56:41 np0005590810 python3.9[43552]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 10:56:41 np0005590810 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 21 10:56:41 np0005590810 systemd[1]: tuned.service: Deactivated successfully.
Jan 21 10:56:41 np0005590810 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 21 10:56:41 np0005590810 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 21 10:56:41 np0005590810 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 21 10:56:42 np0005590810 python3.9[43714]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 21 10:56:46 np0005590810 python3.9[43866]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 10:56:47 np0005590810 systemd[1]: Reloading.
Jan 21 10:56:47 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:56:48 np0005590810 python3.9[44056]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 10:56:48 np0005590810 systemd[1]: Reloading.
Jan 21 10:56:48 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:56:49 np0005590810 python3.9[44244]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:56:49 np0005590810 python3.9[44397]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:56:49 np0005590810 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 21 10:56:50 np0005590810 python3.9[44550]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:56:52 np0005590810 python3.9[44712]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:56:53 np0005590810 python3.9[44865]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 10:56:53 np0005590810 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 21 10:56:53 np0005590810 systemd[1]: Stopped Apply Kernel Variables.
Jan 21 10:56:53 np0005590810 systemd[1]: Stopping Apply Kernel Variables...
Jan 21 10:56:53 np0005590810 systemd[1]: Starting Apply Kernel Variables...
Jan 21 10:56:53 np0005590810 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 21 10:56:53 np0005590810 systemd[1]: Finished Apply Kernel Variables.
Jan 21 10:56:54 np0005590810 systemd[1]: session-9.scope: Deactivated successfully.
Jan 21 10:56:54 np0005590810 systemd[1]: session-9.scope: Consumed 2min 12.067s CPU time.
Jan 21 10:56:54 np0005590810 systemd-logind[795]: Session 9 logged out. Waiting for processes to exit.
Jan 21 10:56:54 np0005590810 systemd-logind[795]: Removed session 9.
Jan 21 10:57:02 np0005590810 systemd-logind[795]: New session 10 of user zuul.
Jan 21 10:57:02 np0005590810 systemd[1]: Started Session 10 of User zuul.
Jan 21 10:57:03 np0005590810 python3.9[45048]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:57:04 np0005590810 python3.9[45204]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 21 10:57:05 np0005590810 python3.9[45357]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 10:57:06 np0005590810 python3.9[45515]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 10:57:08 np0005590810 python3.9[45675]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 10:57:09 np0005590810 python3.9[45759]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 10:57:12 np0005590810 python3.9[45923]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 10:57:24 np0005590810 kernel: SELinux:  Converting 2735 SID table entries...
Jan 21 10:57:24 np0005590810 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 10:57:24 np0005590810 kernel: SELinux:  policy capability open_perms=1
Jan 21 10:57:24 np0005590810 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 10:57:24 np0005590810 kernel: SELinux:  policy capability always_check_network=0
Jan 21 10:57:24 np0005590810 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 10:57:24 np0005590810 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 10:57:24 np0005590810 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 10:57:24 np0005590810 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 21 10:57:24 np0005590810 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 21 10:57:25 np0005590810 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 10:57:25 np0005590810 systemd[1]: Starting man-db-cache-update.service...
Jan 21 10:57:25 np0005590810 systemd[1]: Reloading.
Jan 21 10:57:26 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:57:26 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 10:57:26 np0005590810 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 10:57:26 np0005590810 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 10:57:26 np0005590810 systemd[1]: Finished man-db-cache-update.service.
Jan 21 10:57:26 np0005590810 systemd[1]: run-r2e99c45cd88242088e5f9b698d9d1ebf.service: Deactivated successfully.
Jan 21 10:57:41 np0005590810 python3.9[47022]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 10:57:41 np0005590810 systemd[1]: Reloading.
Jan 21 10:57:41 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:57:41 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 10:57:41 np0005590810 systemd[1]: Starting Open vSwitch Database Unit...
Jan 21 10:57:41 np0005590810 chown[47063]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 21 10:57:41 np0005590810 ovs-ctl[47068]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 21 10:57:41 np0005590810 ovs-ctl[47068]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 21 10:57:41 np0005590810 ovs-ctl[47068]: Starting ovsdb-server [  OK  ]
Jan 21 10:57:41 np0005590810 ovs-vsctl[47117]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 21 10:57:41 np0005590810 ovs-vsctl[47136]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"f6e8413f-2ba2-49cb-8bd6-36b8085ce01c\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 21 10:57:41 np0005590810 ovs-ctl[47068]: Configuring Open vSwitch system IDs [  OK  ]
Jan 21 10:57:41 np0005590810 ovs-ctl[47068]: Enabling remote OVSDB managers [  OK  ]
Jan 21 10:57:41 np0005590810 systemd[1]: Started Open vSwitch Database Unit.
Jan 21 10:57:41 np0005590810 ovs-vsctl[47142]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 21 10:57:41 np0005590810 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 21 10:57:41 np0005590810 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 21 10:57:42 np0005590810 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 21 10:57:42 np0005590810 kernel: openvswitch: Open vSwitch switching datapath
Jan 21 10:57:42 np0005590810 ovs-ctl[47187]: Inserting openvswitch module [  OK  ]
Jan 21 10:57:42 np0005590810 ovs-ctl[47156]: Starting ovs-vswitchd [  OK  ]
Jan 21 10:57:42 np0005590810 ovs-vsctl[47205]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 21 10:57:42 np0005590810 ovs-ctl[47156]: Enabling remote OVSDB managers [  OK  ]
Jan 21 10:57:42 np0005590810 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 21 10:57:42 np0005590810 systemd[1]: Starting Open vSwitch...
Jan 21 10:57:42 np0005590810 systemd[1]: Finished Open vSwitch.
Jan 21 10:57:43 np0005590810 python3.9[47356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:57:44 np0005590810 python3.9[47508]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 21 10:57:45 np0005590810 kernel: SELinux:  Converting 2749 SID table entries...
Jan 21 10:57:45 np0005590810 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 10:57:45 np0005590810 kernel: SELinux:  policy capability open_perms=1
Jan 21 10:57:45 np0005590810 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 10:57:45 np0005590810 kernel: SELinux:  policy capability always_check_network=0
Jan 21 10:57:45 np0005590810 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 10:57:45 np0005590810 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 10:57:45 np0005590810 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 10:57:47 np0005590810 python3.9[47663]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:57:48 np0005590810 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 21 10:57:48 np0005590810 python3.9[47821]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 10:57:51 np0005590810 python3.9[47974]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:57:52 np0005590810 python3.9[48261]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 21 10:57:53 np0005590810 python3.9[48411]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 10:57:54 np0005590810 python3.9[48565]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 10:57:56 np0005590810 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 10:57:56 np0005590810 systemd[1]: Starting man-db-cache-update.service...
Jan 21 10:57:56 np0005590810 systemd[1]: Reloading.
Jan 21 10:57:56 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:57:56 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 10:57:56 np0005590810 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 10:57:57 np0005590810 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 10:57:57 np0005590810 systemd[1]: Finished man-db-cache-update.service.
Jan 21 10:57:57 np0005590810 systemd[1]: run-r379296adc92c4405bb1f1815e06b9dbf.service: Deactivated successfully.
Jan 21 10:58:00 np0005590810 python3.9[48881]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 10:58:00 np0005590810 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 21 10:58:00 np0005590810 systemd[1]: Stopped Network Manager Wait Online.
Jan 21 10:58:00 np0005590810 systemd[1]: Stopping Network Manager Wait Online...
Jan 21 10:58:00 np0005590810 systemd[1]: Stopping Network Manager...
Jan 21 10:58:00 np0005590810 NetworkManager[7198]: <info>  [1769011080.2045] caught SIGTERM, shutting down normally.
Jan 21 10:58:00 np0005590810 NetworkManager[7198]: <info>  [1769011080.2063] dhcp4 (eth0): canceled DHCP transaction
Jan 21 10:58:00 np0005590810 NetworkManager[7198]: <info>  [1769011080.2064] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 10:58:00 np0005590810 NetworkManager[7198]: <info>  [1769011080.2064] dhcp4 (eth0): state changed no lease
Jan 21 10:58:00 np0005590810 NetworkManager[7198]: <info>  [1769011080.2068] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 10:58:00 np0005590810 NetworkManager[7198]: <info>  [1769011080.2145] exiting (success)
Jan 21 10:58:00 np0005590810 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 10:58:00 np0005590810 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 10:58:00 np0005590810 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 21 10:58:00 np0005590810 systemd[1]: Stopped Network Manager.
Jan 21 10:58:00 np0005590810 systemd[1]: NetworkManager.service: Consumed 10.400s CPU time, 4.4M memory peak, read 0B from disk, written 23.5K to disk.
Jan 21 10:58:00 np0005590810 systemd[1]: Starting Network Manager...
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.2866] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:270b975d-78fa-4cd0-8c03-59ef0f09243d)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.2870] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.2925] manager[0x561822f43000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 21 10:58:00 np0005590810 systemd[1]: Starting Hostname Service...
Jan 21 10:58:00 np0005590810 systemd[1]: Started Hostname Service.
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3755] hostname: hostname: using hostnamed
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3756] hostname: static hostname changed from (none) to "compute-0"
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3759] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3767] manager[0x561822f43000]: rfkill: Wi-Fi hardware radio set enabled
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3767] manager[0x561822f43000]: rfkill: WWAN hardware radio set enabled
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3789] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3798] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3799] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3799] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3800] manager: Networking is enabled by state file
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3802] settings: Loaded settings plugin: keyfile (internal)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3805] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3829] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3837] dhcp: init: Using DHCP client 'internal'
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3839] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3844] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3848] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3855] device (lo): Activation: starting connection 'lo' (253b81e5-ac75-4452-a3e4-15be611c9139)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3861] device (eth0): carrier: link connected
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3865] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3869] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3869] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3876] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3882] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3888] device (eth1): carrier: link connected
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3892] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3897] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (6f25b135-dfb2-58ee-a797-68bd03650dcd) (indicated)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3897] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3901] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3912] device (eth1): Activation: starting connection 'ci-private-network' (6f25b135-dfb2-58ee-a797-68bd03650dcd)
Jan 21 10:58:00 np0005590810 systemd[1]: Started Network Manager.
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3918] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3925] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3940] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3943] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3945] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3949] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3951] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3954] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3962] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3969] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3973] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3983] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.3996] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4010] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4018] dhcp4 (eth0): state changed new lease, address=38.129.56.235
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4022] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4026] device (lo): Activation: successful, device activated.
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4034] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 21 10:58:00 np0005590810 systemd[1]: Starting Network Manager Wait Online...
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4103] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4110] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4121] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4125] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4128] device (eth1): Activation: successful, device activated.
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4139] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4140] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4143] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4145] device (eth0): Activation: successful, device activated.
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4150] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 21 10:58:00 np0005590810 NetworkManager[48894]: <info>  [1769011080.4152] manager: startup complete
Jan 21 10:58:00 np0005590810 systemd[1]: Finished Network Manager Wait Online.
Jan 21 10:58:01 np0005590810 python3.9[49107]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 10:58:08 np0005590810 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 10:58:08 np0005590810 systemd[1]: Starting man-db-cache-update.service...
Jan 21 10:58:08 np0005590810 systemd[1]: Reloading.
Jan 21 10:58:08 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 10:58:09 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 10:58:09 np0005590810 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 10:58:09 np0005590810 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 10:58:09 np0005590810 systemd[1]: Finished man-db-cache-update.service.
Jan 21 10:58:09 np0005590810 systemd[1]: run-r9d46116694c44c5d8a2c2267d9d85789.service: Deactivated successfully.
Jan 21 10:58:10 np0005590810 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 10:58:13 np0005590810 python3.9[49569]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 10:58:14 np0005590810 python3.9[49721]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:58:15 np0005590810 python3.9[49875]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:58:16 np0005590810 python3.9[50027]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:58:17 np0005590810 python3.9[50179]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:58:17 np0005590810 python3.9[50331]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:58:18 np0005590810 python3.9[50483]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 10:58:19 np0005590810 python3.9[50606]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769011097.9857426-642-122248029157811/.source _original_basename=._4zyngdd follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:58:19 np0005590810 python3.9[50758]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:58:22 np0005590810 python3.9[50910]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 21 10:58:22 np0005590810 python3.9[51062]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:58:25 np0005590810 python3.9[51489]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 21 10:58:26 np0005590810 ansible-async_wrapper.py[51664]: Invoked with j254728850445 300 /home/zuul/.ansible/tmp/ansible-tmp-1769011105.4011126-840-212099752762063/AnsiballZ_edpm_os_net_config.py _
Jan 21 10:58:26 np0005590810 ansible-async_wrapper.py[51667]: Starting module and watcher
Jan 21 10:58:26 np0005590810 ansible-async_wrapper.py[51667]: Start watching 51668 (300)
Jan 21 10:58:26 np0005590810 ansible-async_wrapper.py[51668]: Start module (51668)
Jan 21 10:58:26 np0005590810 ansible-async_wrapper.py[51664]: Return async_wrapper task started.
Jan 21 10:58:26 np0005590810 python3.9[51669]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 21 10:58:27 np0005590810 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 21 10:58:27 np0005590810 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 21 10:58:27 np0005590810 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 21 10:58:27 np0005590810 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 21 10:58:27 np0005590810 kernel: cfg80211: failed to load regulatory.db
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.1590] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.1608] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2069] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2071] audit: op="connection-add" uuid="1cdbb931-03b7-488a-9483-9818c73ae055" name="br-ex-br" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2085] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2087] audit: op="connection-add" uuid="0c3f94ab-6387-49ad-894a-dff90649f2cd" name="br-ex-port" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2097] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2098] audit: op="connection-add" uuid="38800b1e-0bd8-42a3-ac6d-662e79f3a328" name="eth1-port" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2108] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2110] audit: op="connection-add" uuid="a7802b00-49a3-411d-9d84-edcba1d36210" name="vlan20-port" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2120] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2122] audit: op="connection-add" uuid="7e5257fd-f5e7-45b1-9ad2-447e3b9c2eed" name="vlan21-port" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2131] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2133] audit: op="connection-add" uuid="a37abc31-d792-4b36-b0a2-dc031de6c537" name="vlan22-port" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2144] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2145] audit: op="connection-add" uuid="6d6f6b8a-481f-4425-939e-d5b2b439f973" name="vlan23-port" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2163] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2177] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.2178] audit: op="connection-add" uuid="5e878e9b-c36b-4537-acfa-3ad17a7f3d38" name="br-ex-if" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3361] audit: op="connection-update" uuid="6f25b135-dfb2-58ee-a797-68bd03650dcd" name="ci-private-network" args="connection.controller,connection.slave-type,connection.port-type,connection.timestamp,connection.master,ipv4.addresses,ipv4.method,ipv4.routes,ipv4.never-default,ipv4.dns,ipv4.routing-rules,ovs-interface.type,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ovs-external-ids.data" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3380] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3382] audit: op="connection-add" uuid="29dda54c-675b-4609-a095-8ee1eaf09138" name="vlan20-if" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3397] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3399] audit: op="connection-add" uuid="ab0d49e4-fc33-4ac5-9b61-f3d0b4f846de" name="vlan21-if" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3413] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3415] audit: op="connection-add" uuid="e595fef5-04d5-40ff-b06a-a3d03421bbb1" name="vlan22-if" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3432] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3434] audit: op="connection-add" uuid="c56ff8f7-9c1a-4cd0-a1e4-15080bca10f5" name="vlan23-if" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3448] audit: op="connection-delete" uuid="1a61991b-b038-3a52-8990-88651b7c1e06" name="Wired connection 1" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3462] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <warn>  [1769011108.3466] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3472] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3477] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (1cdbb931-03b7-488a-9483-9818c73ae055)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3478] audit: op="connection-activate" uuid="1cdbb931-03b7-488a-9483-9818c73ae055" name="br-ex-br" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3480] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <warn>  [1769011108.3481] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3486] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3489] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (0c3f94ab-6387-49ad-894a-dff90649f2cd)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3491] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <warn>  [1769011108.3493] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3498] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3504] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (38800b1e-0bd8-42a3-ac6d-662e79f3a328)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3506] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <warn>  [1769011108.3508] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3514] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3519] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (a7802b00-49a3-411d-9d84-edcba1d36210)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3521] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <warn>  [1769011108.3523] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3529] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3533] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7e5257fd-f5e7-45b1-9ad2-447e3b9c2eed)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3535] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <warn>  [1769011108.3537] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3543] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3547] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (a37abc31-d792-4b36-b0a2-dc031de6c537)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3550] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <warn>  [1769011108.3551] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3557] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3562] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (6d6f6b8a-481f-4425-939e-d5b2b439f973)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3563] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3567] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3569] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3576] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <warn>  [1769011108.3578] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3581] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3587] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (5e878e9b-c36b-4537-acfa-3ad17a7f3d38)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3589] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3593] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3596] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3598] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3600] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3611] device (eth1): disconnecting for new activation request.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3613] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3623] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3625] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3626] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3629] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <warn>  [1769011108.3630] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3633] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3637] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (29dda54c-675b-4609-a095-8ee1eaf09138)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3638] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3642] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3645] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3647] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3650] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <warn>  [1769011108.3652] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3656] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3661] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (ab0d49e4-fc33-4ac5-9b61-f3d0b4f846de)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3662] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3666] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3669] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3670] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3674] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <warn>  [1769011108.3676] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3680] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3685] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (e595fef5-04d5-40ff-b06a-a3d03421bbb1)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3687] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3690] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3693] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3695] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3698] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <warn>  [1769011108.3700] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3704] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3708] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (c56ff8f7-9c1a-4cd0-a1e4-15080bca10f5)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3710] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3713] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3716] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3718] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3720] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3732] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3735] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3739] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3741] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3748] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3751] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3754] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3757] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3759] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3763] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3767] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3770] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3771] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 kernel: ovs-system: entered promiscuous mode
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3777] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3781] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3784] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3786] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3789] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3792] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3796] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3797] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 systemd-udevd[51675]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3835] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3844] dhcp4 (eth0): canceled DHCP transaction
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3844] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3844] dhcp4 (eth0): state changed no lease
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.3847] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.4005] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 21 10:58:28 np0005590810 kernel: Timeout policy base is empty
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.4024] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51670 uid=0 result="fail" reason="Device is not activated"
Jan 21 10:58:28 np0005590810 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.4028] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 21 10:58:28 np0005590810 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 10:58:28 np0005590810 kernel: br-ex: entered promiscuous mode
Jan 21 10:58:28 np0005590810 kernel: vlan21: entered promiscuous mode
Jan 21 10:58:28 np0005590810 kernel: vlan20: entered promiscuous mode
Jan 21 10:58:28 np0005590810 systemd-udevd[51674]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.6336] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.6341] dhcp4 (eth0): state changed new lease, address=38.129.56.235
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.6356] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.6363] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.6372] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.6378] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.6385] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 21 10:58:28 np0005590810 kernel: vlan22: entered promiscuous mode
Jan 21 10:58:28 np0005590810 kernel: vlan23: entered promiscuous mode
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9036] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9190] device (eth1): Activation: starting connection 'ci-private-network' (6f25b135-dfb2-58ee-a797-68bd03650dcd)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9196] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9198] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9203] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9205] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9207] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9209] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9211] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9213] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9220] device (eth1): disconnecting for new activation request.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9221] audit: op="connection-activate" uuid="6f25b135-dfb2-58ee-a797-68bd03650dcd" name="ci-private-network" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9240] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9246] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9249] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9253] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9258] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9262] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9267] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9271] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9277] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9281] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9285] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9290] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9294] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9297] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9301] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9305] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9333] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51670 uid=0 result="success"
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9334] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9342] device (eth1): Activation: starting connection 'ci-private-network' (6f25b135-dfb2-58ee-a797-68bd03650dcd)
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9348] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9375] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9380] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9391] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9400] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9424] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9432] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9439] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 21 10:58:28 np0005590810 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9445] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9454] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9466] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9471] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9477] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9483] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9484] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9489] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9494] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9499] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9504] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9510] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9511] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9512] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9521] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9526] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9533] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9538] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9544] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 10:58:28 np0005590810 NetworkManager[48894]: <info>  [1769011108.9549] device (eth1): Activation: successful, device activated.
Jan 21 10:58:29 np0005590810 python3.9[52032]: ansible-ansible.legacy.async_status Invoked with jid=j254728850445.51664 mode=status _async_dir=/root/.ansible_async
Jan 21 10:58:30 np0005590810 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 10:58:30 np0005590810 NetworkManager[48894]: <info>  [1769011110.6380] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51670 uid=0 result="success"
Jan 21 10:58:30 np0005590810 NetworkManager[48894]: <info>  [1769011110.8170] checkpoint[0x561822f18950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 21 10:58:30 np0005590810 NetworkManager[48894]: <info>  [1769011110.8172] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51670 uid=0 result="success"
Jan 21 10:58:31 np0005590810 NetworkManager[48894]: <info>  [1769011111.0880] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51670 uid=0 result="success"
Jan 21 10:58:31 np0005590810 NetworkManager[48894]: <info>  [1769011111.0891] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51670 uid=0 result="success"
Jan 21 10:58:31 np0005590810 ansible-async_wrapper.py[51667]: 51668 still running (300)
Jan 21 10:58:31 np0005590810 NetworkManager[48894]: <info>  [1769011111.5053] audit: op="networking-control" arg="global-dns-configuration" pid=51670 uid=0 result="success"
Jan 21 10:58:31 np0005590810 NetworkManager[48894]: <info>  [1769011111.5591] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 21 10:58:31 np0005590810 NetworkManager[48894]: <info>  [1769011111.7772] audit: op="networking-control" arg="global-dns-configuration" pid=51670 uid=0 result="success"
Jan 21 10:58:31 np0005590810 NetworkManager[48894]: <info>  [1769011111.7808] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51670 uid=0 result="success"
Jan 21 10:58:31 np0005590810 NetworkManager[48894]: <info>  [1769011111.9554] checkpoint[0x561822f18a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 21 10:58:31 np0005590810 NetworkManager[48894]: <info>  [1769011111.9560] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51670 uid=0 result="success"
Jan 21 10:58:32 np0005590810 ansible-async_wrapper.py[51668]: Module complete (51668)
Jan 21 10:58:33 np0005590810 python3.9[52140]: ansible-ansible.legacy.async_status Invoked with jid=j254728850445.51664 mode=status _async_dir=/root/.ansible_async
Jan 21 10:58:33 np0005590810 python3.9[52240]: ansible-ansible.legacy.async_status Invoked with jid=j254728850445.51664 mode=cleanup _async_dir=/root/.ansible_async
Jan 21 10:58:34 np0005590810 python3.9[52392]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 10:58:35 np0005590810 python3.9[52515]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769011114.326406-921-62010084455783/.source.returncode _original_basename=.ke544shb follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:58:36 np0005590810 ansible-async_wrapper.py[51667]: Done in kid B.
Jan 21 10:58:36 np0005590810 python3.9[52667]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 10:58:36 np0005590810 python3.9[52791]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769011115.7482271-969-220652180599482/.source.cfg _original_basename=.kvurjb2c follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:58:38 np0005590810 python3.9[52943]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 10:58:38 np0005590810 systemd[1]: Reloading Network Manager...
Jan 21 10:58:38 np0005590810 NetworkManager[48894]: <info>  [1769011118.1412] audit: op="reload" arg="0" pid=52947 uid=0 result="success"
Jan 21 10:58:38 np0005590810 NetworkManager[48894]: <info>  [1769011118.1421] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 21 10:58:38 np0005590810 systemd[1]: Reloaded Network Manager.
Jan 21 10:58:38 np0005590810 systemd[1]: session-10.scope: Deactivated successfully.
Jan 21 10:58:38 np0005590810 systemd[1]: session-10.scope: Consumed 48.832s CPU time.
Jan 21 10:58:38 np0005590810 systemd-logind[795]: Session 10 logged out. Waiting for processes to exit.
Jan 21 10:58:38 np0005590810 systemd-logind[795]: Removed session 10.
Jan 21 10:58:44 np0005590810 systemd-logind[795]: New session 11 of user zuul.
Jan 21 10:58:44 np0005590810 systemd[1]: Started Session 11 of User zuul.
Jan 21 10:58:45 np0005590810 python3.9[53131]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:58:46 np0005590810 python3.9[53286]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 10:58:48 np0005590810 python3.9[53479]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:58:48 np0005590810 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 10:58:48 np0005590810 systemd[1]: session-11.scope: Deactivated successfully.
Jan 21 10:58:48 np0005590810 systemd[1]: session-11.scope: Consumed 2.183s CPU time.
Jan 21 10:58:48 np0005590810 systemd-logind[795]: Session 11 logged out. Waiting for processes to exit.
Jan 21 10:58:48 np0005590810 systemd-logind[795]: Removed session 11.
Jan 21 10:58:57 np0005590810 systemd-logind[795]: New session 12 of user zuul.
Jan 21 10:58:57 np0005590810 systemd[1]: Started Session 12 of User zuul.
Jan 21 10:58:58 np0005590810 python3.9[53662]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:58:59 np0005590810 python3.9[53817]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:59:01 np0005590810 python3.9[53973]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 10:59:02 np0005590810 python3.9[54057]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 10:59:05 np0005590810 python3.9[54210]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 10:59:07 np0005590810 python3.9[54405]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:59:07 np0005590810 python3.9[54557]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:59:08 np0005590810 podman[54558]: 2026-01-21 15:59:08.475758307 +0000 UTC m=+0.527130247 system refresh
Jan 21 10:59:09 np0005590810 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 10:59:11 np0005590810 python3.9[54720]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 10:59:11 np0005590810 python3.9[54843]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011150.630385-192-147019296664958/.source.json follow=False _original_basename=podman_network_config.j2 checksum=08e09cacc3142d7646654f43f5df5529b0166966 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:59:13 np0005590810 python3.9[54995]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 10:59:13 np0005590810 python3.9[55118]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769011152.6540496-237-1547195823661/.source.conf follow=False _original_basename=registries.conf.j2 checksum=aa111deb4a4618b8b0ade5e08aa989c05f1c31ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 10:59:14 np0005590810 python3.9[55270]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 10:59:15 np0005590810 python3.9[55422]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 10:59:15 np0005590810 python3.9[55574]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 10:59:16 np0005590810 python3.9[55726]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 10:59:17 np0005590810 python3.9[55878]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 10:59:20 np0005590810 python3.9[56031]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 10:59:21 np0005590810 python3.9[56185]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 10:59:22 np0005590810 python3.9[56337]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 10:59:23 np0005590810 python3.9[56489]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 10:59:24 np0005590810 python3.9[56642]: ansible-service_facts Invoked
Jan 21 10:59:24 np0005590810 network[56659]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 10:59:24 np0005590810 network[56660]: 'network-scripts' will be removed from distribution in near future.
Jan 21 10:59:24 np0005590810 network[56661]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 10:59:29 np0005590810 python3.9[57113]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 10:59:32 np0005590810 python3.9[57266]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 21 10:59:33 np0005590810 python3.9[57418]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 10:59:34 np0005590810 python3.9[57543]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769011173.2802093-669-45199729236382/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:59:35 np0005590810 python3.9[57697]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 10:59:35 np0005590810 python3.9[57822]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769011174.6542912-714-105155167697742/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:59:37 np0005590810 python3.9[57976]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:59:39 np0005590810 python3.9[58130]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 10:59:40 np0005590810 python3.9[58214]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 10:59:42 np0005590810 python3.9[58368]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 10:59:43 np0005590810 python3.9[58452]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 10:59:43 np0005590810 chronyd[791]: chronyd exiting
Jan 21 10:59:43 np0005590810 systemd[1]: Stopping NTP client/server...
Jan 21 10:59:43 np0005590810 systemd[1]: chronyd.service: Deactivated successfully.
Jan 21 10:59:43 np0005590810 systemd[1]: Stopped NTP client/server.
Jan 21 10:59:43 np0005590810 systemd[1]: Starting NTP client/server...
Jan 21 10:59:43 np0005590810 chronyd[58461]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 21 10:59:43 np0005590810 chronyd[58461]: Frequency -31.384 +/- 0.307 ppm read from /var/lib/chrony/drift
Jan 21 10:59:43 np0005590810 chronyd[58461]: Loaded seccomp filter (level 2)
Jan 21 10:59:43 np0005590810 systemd[1]: Started NTP client/server.
Jan 21 10:59:43 np0005590810 systemd[1]: session-12.scope: Deactivated successfully.
Jan 21 10:59:43 np0005590810 systemd[1]: session-12.scope: Consumed 24.000s CPU time.
Jan 21 10:59:43 np0005590810 systemd-logind[795]: Session 12 logged out. Waiting for processes to exit.
Jan 21 10:59:43 np0005590810 systemd-logind[795]: Removed session 12.
Jan 21 10:59:50 np0005590810 systemd-logind[795]: New session 13 of user zuul.
Jan 21 10:59:50 np0005590810 systemd[1]: Started Session 13 of User zuul.
Jan 21 10:59:50 np0005590810 python3.9[58642]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:59:51 np0005590810 python3.9[58794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 10:59:52 np0005590810 python3.9[58917]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769011191.0958025-57-29188348194001/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 10:59:52 np0005590810 systemd[1]: session-13.scope: Deactivated successfully.
Jan 21 10:59:52 np0005590810 systemd[1]: session-13.scope: Consumed 1.488s CPU time.
Jan 21 10:59:52 np0005590810 systemd-logind[795]: Session 13 logged out. Waiting for processes to exit.
Jan 21 10:59:52 np0005590810 systemd-logind[795]: Removed session 13.
Jan 21 10:59:58 np0005590810 systemd-logind[795]: New session 14 of user zuul.
Jan 21 10:59:58 np0005590810 systemd[1]: Started Session 14 of User zuul.
Jan 21 10:59:59 np0005590810 python3.9[59095]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:00:00 np0005590810 python3.9[59251]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:01 np0005590810 python3.9[59426]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:02 np0005590810 python3.9[59549]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769011200.9512527-78-179083633391469/.source.json _original_basename=.fwpkrllj follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:03 np0005590810 python3.9[59701]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:04 np0005590810 python3.9[59824]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769011203.4491384-147-148100856680897/.source _original_basename=.tf9ljp17 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:05 np0005590810 python3.9[59976]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:00:05 np0005590810 python3.9[60128]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:07 np0005590810 python3.9[60251]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769011205.4701958-219-174977490255920/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:00:08 np0005590810 python3.9[60403]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:08 np0005590810 python3.9[60526]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769011207.5968008-219-74749804750251/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:00:09 np0005590810 python3.9[60678]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:09 np0005590810 python3.9[60830]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:10 np0005590810 python3.9[60953]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011209.451146-330-117287798911930/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:11 np0005590810 python3.9[61105]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:11 np0005590810 python3.9[61228]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011210.7399616-375-26582760143336/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:14 np0005590810 python3.9[61380]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:00:14 np0005590810 systemd[1]: Reloading.
Jan 21 11:00:14 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:00:14 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:00:14 np0005590810 systemd[1]: Reloading.
Jan 21 11:00:14 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:00:14 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:00:14 np0005590810 systemd[1]: Starting EDPM Container Shutdown...
Jan 21 11:00:14 np0005590810 systemd[1]: Finished EDPM Container Shutdown.
Jan 21 11:00:15 np0005590810 python3.9[61606]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:16 np0005590810 python3.9[61729]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011215.382255-444-147602196970384/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:17 np0005590810 python3.9[61881]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:17 np0005590810 python3.9[62004]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011216.7902367-489-167616320564402/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:18 np0005590810 python3.9[62156]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:00:18 np0005590810 systemd[1]: Reloading.
Jan 21 11:00:18 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:00:18 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:00:18 np0005590810 systemd[1]: Reloading.
Jan 21 11:00:18 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:00:18 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:00:19 np0005590810 systemd[1]: Starting Create netns directory...
Jan 21 11:00:19 np0005590810 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 11:00:19 np0005590810 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 11:00:19 np0005590810 systemd[1]: Finished Create netns directory.
Jan 21 11:00:20 np0005590810 python3.9[62384]: ansible-ansible.builtin.service_facts Invoked
Jan 21 11:00:20 np0005590810 network[62401]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 11:00:20 np0005590810 network[62402]: 'network-scripts' will be removed from distribution in near future.
Jan 21 11:00:20 np0005590810 network[62403]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 11:00:23 np0005590810 python3.9[62665]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:00:23 np0005590810 systemd[1]: Reloading.
Jan 21 11:00:24 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:00:24 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:00:24 np0005590810 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 21 11:00:24 np0005590810 iptables.init[62706]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 21 11:00:24 np0005590810 iptables.init[62706]: iptables: Flushing firewall rules: [  OK  ]
Jan 21 11:00:24 np0005590810 systemd[1]: iptables.service: Deactivated successfully.
Jan 21 11:00:24 np0005590810 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 21 11:00:25 np0005590810 python3.9[62902]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:00:26 np0005590810 python3.9[63056]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:00:26 np0005590810 systemd[1]: Reloading.
Jan 21 11:00:26 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:00:26 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:00:26 np0005590810 systemd[1]: Starting Netfilter Tables...
Jan 21 11:00:26 np0005590810 systemd[1]: Finished Netfilter Tables.
Jan 21 11:00:27 np0005590810 python3.9[63247]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:00:28 np0005590810 python3.9[63400]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:29 np0005590810 python3.9[63525]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769011228.494915-696-181822303808978/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:30 np0005590810 python3.9[63678]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:00:30 np0005590810 systemd[1]: Reloading OpenSSH server daemon...
Jan 21 11:00:30 np0005590810 systemd[1]: Reloaded OpenSSH server daemon.
Jan 21 11:00:31 np0005590810 python3.9[63834]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:32 np0005590810 python3.9[63986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:32 np0005590810 python3.9[64109]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011231.6635327-789-20797041980445/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:33 np0005590810 python3.9[64261]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 21 11:00:33 np0005590810 systemd[1]: Starting Time & Date Service...
Jan 21 11:00:33 np0005590810 systemd[1]: Started Time & Date Service.
Jan 21 11:00:35 np0005590810 python3.9[64417]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:36 np0005590810 python3.9[64569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:36 np0005590810 python3.9[64692]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769011235.6469796-894-275477428163879/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:37 np0005590810 python3.9[64844]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:37 np0005590810 python3.9[64967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769011236.974219-939-38444349674777/.source.yaml _original_basename=.wp373w3f follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:38 np0005590810 python3.9[65119]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:39 np0005590810 python3.9[65242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011238.2644083-984-202342017077842/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:40 np0005590810 python3.9[65394]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:00:40 np0005590810 python3.9[65547]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:00:41 np0005590810 python3[65700]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 11:00:42 np0005590810 python3.9[65852]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:43 np0005590810 python3.9[65975]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011242.0065722-1101-121288841770522/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:43 np0005590810 python3.9[66127]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:44 np0005590810 python3.9[66250]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011243.3849907-1146-242059100939072/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:45 np0005590810 python3.9[66402]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:45 np0005590810 python3.9[66525]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011244.738519-1191-225055628760592/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:46 np0005590810 python3.9[66677]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:47 np0005590810 python3.9[66800]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011246.0655127-1236-103226147230805/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:47 np0005590810 python3.9[66952]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:00:48 np0005590810 python3.9[67075]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011247.434606-1281-156022022111943/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:49 np0005590810 python3.9[67227]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:49 np0005590810 python3.9[67379]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:00:50 np0005590810 python3.9[67538]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:51 np0005590810 python3.9[67691]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:52 np0005590810 python3.9[67843]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:00:53 np0005590810 python3.9[67995]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 11:00:54 np0005590810 python3.9[68148]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 11:00:54 np0005590810 systemd[1]: session-14.scope: Deactivated successfully.
Jan 21 11:00:54 np0005590810 systemd[1]: session-14.scope: Consumed 33.301s CPU time.
Jan 21 11:00:54 np0005590810 systemd-logind[795]: Session 14 logged out. Waiting for processes to exit.
Jan 21 11:00:54 np0005590810 systemd-logind[795]: Removed session 14.
Jan 21 11:01:00 np0005590810 systemd-logind[795]: New session 15 of user zuul.
Jan 21 11:01:00 np0005590810 systemd[1]: Started Session 15 of User zuul.
Jan 21 11:01:01 np0005590810 python3.9[68329]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 21 11:01:02 np0005590810 python3.9[68496]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:01:03 np0005590810 python3.9[68648]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:01:04 np0005590810 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 21 11:01:04 np0005590810 python3.9[68802]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv0OBA2N1TAhxCdzXzrKN5DlNfOrnT3Wi6nfzeJkABVqUJeFd/SfTq8jSsLQ0pSkaOtVz+7W4LH88S0z3Nr1QfpfW4gHrJ1pT3O8Biq3Mgx7hUrKnL2cT1yKiD5Iq6T8UfNKNevEDbj0NQ+Jic0LJcUkOXatyclTAfvo8YENhy8hYnpUwaok5oAr7uw5HG4RZIj8PBGPWkSEdi4tKcGFXULERSm/K1rqhn5MOIzE3Dmvbnz3tBIzr8tAYdgXau4u4WTSBksysxWmVSk2eyhM/lvvd5TcaDGxH83eA2teAos9JkHzlxc2CXEBlGAuUlCbkJ69epl2vk9TKE87AhQhX7HGGImZ5toC6v4HVxWg95OMnE58pagea+0piEMIIxqZqMeWO6MNTXSbMTnhLPWiVUaA55u3OXGCg01yx9SLoy/bf/qXSsNv+3CZzjM2pn1JDpa2ZWcdZZ1WEmzj7z7uIIOR2M29jmSLqDYojaCwQrQ2X4H0RZ/PUgBDanAtgsAVs=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKmF8H6cVWPWJTBmu5sIvEQ1SEBiVtyh3cbexmKkjI7Z#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJOj6LbdCoXBm80G93arkMtQif0yRoMDDmGu5j1rGV2FPgXCY5k6WoAAG4AGJ49Uf/s3xvYGbnl4/h56B9Fe044=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3Iqo/i3IcHIykkN8D4X+kGokBMPdD6So6vktK/kuArfPGY9DRKQDPQTng9i8cEzg3k9G/Fw8NYCfkLPwWq3mT2vsX5CI5kacYxnTZd2e3uEbwEqIofkP+X4jxc3idj4xz6NIROh0h4ZELPLZoNr/Gws7+ZWVTlBRYYoQegDQvNVvIgQoFQg7TEFLBQ3+foQenlf/CoWRvdznwVr8Yd4lVM5MA+47Yv0lr0HoFVydahQUDb81O3hGXuTxmaYYUuwURQf6gJgalzxytF9nPuT8yx4aVsE7EHYLyMcXMioRAIyo2Ucl7tItO2I8R+NdTwwdfqBykheE/tcj3RH9CkvrNmUW4M6ttnSPBSvymxteLfANWFBDmNUp1POj/BLvHCfI3HK+tXVQQqxbTdf7jA4Y4+1Z8mXxGMsnBm+hvLZX383qQKXk86tH2o4a68WPC01j/5yXrNoutppw/5coIiBasAgYj+UDAK5Vcroyb1adwZ9NPaZ7kuhdomj1ExvosmM0=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII2WIswHvg9V6rPDqJn3Fes0nz60HX3SPtnVmRIM+62w#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJtQAX9qYDZG4bYi2g9Sd+kC8/wUgucn/wABzN43Z14vseyme19Ye6/KW5wcv9xwMfGcTmL0sRtXjENcBHkixw4=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwcS0MSlu+GKAz+lDpBna25Kps4X5YW4KrOmLpWp+emFCG8fzlXBV+TxxMmBmtUiTsJO1/NaTLWNuadxcYslky2cThrxY1qAQADYCp9yLRn2OhM5+22XBsp9bNROL17hs+l5RddUQL2b1t9m0a/oRUocMv4Wy4ukc+dooKfqPSJK6VDl3MiUf8VqaJnoY5uAV84Qv5+Ku5emapmZ9va5WF+rLFumdEVTcdhhLwHxcl88xD1hNBWlfo7Bth/6ouVMa3EHFOJF8MM01l+MdGT9lGFulJnsq9xWIC0TrpuquuZGDhtL7FLcUa/UUhRjl3FIKhpIp6jHE1/qzBaIRPFR4va55U5rvOPkml/Oy9GFoHKL+o6KaAGzsQoLx4974jP8qMrCWhi6eSq6XY/cIxiNtvrdnxKrlDkT+Nh6RxYrATeUj8PpbABYgKHhPxJEh7BfNxLqqCNXW0MXw9rRxDnRqv2dhC5xPF08V5B5mmC7+gLeSqCaZrI16j8cj35LLe/5c=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN2892RP3rwefuRtkEcf8F9bZmp8LNkkHHtcAEke5aUU#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBItJYQ6JQLmwVGkkei84vuzYFf7il2vni7w9cIAKRYoy2WzAfVMVgO3nCoqO8E/cBJeFrGYRv6JSsIas6GFr9Pc=#012 create=True mode=0644 path=/tmp/ansible.o23dl88m state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:01:05 np0005590810 python3.9[68954]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.o23dl88m' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:01:06 np0005590810 python3.9[69108]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.o23dl88m state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:01:07 np0005590810 systemd[1]: session-15.scope: Deactivated successfully.
Jan 21 11:01:07 np0005590810 systemd[1]: session-15.scope: Consumed 3.181s CPU time.
Jan 21 11:01:07 np0005590810 systemd-logind[795]: Session 15 logged out. Waiting for processes to exit.
Jan 21 11:01:07 np0005590810 systemd-logind[795]: Removed session 15.
Jan 21 11:01:13 np0005590810 systemd-logind[795]: New session 16 of user zuul.
Jan 21 11:01:13 np0005590810 systemd[1]: Started Session 16 of User zuul.
Jan 21 11:01:14 np0005590810 python3.9[69287]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:01:15 np0005590810 python3.9[69443]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 21 11:01:16 np0005590810 python3.9[69597]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:01:17 np0005590810 python3.9[69750]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:01:18 np0005590810 python3.9[69903]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:01:18 np0005590810 python3.9[70057]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:01:19 np0005590810 python3.9[70212]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:01:20 np0005590810 systemd[1]: session-16.scope: Deactivated successfully.
Jan 21 11:01:20 np0005590810 systemd[1]: session-16.scope: Consumed 4.418s CPU time.
Jan 21 11:01:20 np0005590810 systemd-logind[795]: Session 16 logged out. Waiting for processes to exit.
Jan 21 11:01:20 np0005590810 systemd-logind[795]: Removed session 16.
Jan 21 11:01:27 np0005590810 systemd-logind[795]: New session 17 of user zuul.
Jan 21 11:01:27 np0005590810 systemd[1]: Started Session 17 of User zuul.
Jan 21 11:01:28 np0005590810 python3.9[70390]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:01:29 np0005590810 python3.9[70546]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 11:01:30 np0005590810 python3.9[70630]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 11:01:33 np0005590810 python3.9[70781]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:01:34 np0005590810 python3.9[70932]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 11:01:35 np0005590810 python3.9[71082]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:01:35 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:01:36 np0005590810 python3.9[71233]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:01:37 np0005590810 systemd[1]: session-17.scope: Deactivated successfully.
Jan 21 11:01:37 np0005590810 systemd[1]: session-17.scope: Consumed 5.506s CPU time.
Jan 21 11:01:37 np0005590810 systemd-logind[795]: Session 17 logged out. Waiting for processes to exit.
Jan 21 11:01:37 np0005590810 systemd-logind[795]: Removed session 17.
Jan 21 11:01:46 np0005590810 systemd-logind[795]: New session 18 of user zuul.
Jan 21 11:01:46 np0005590810 systemd[1]: Started Session 18 of User zuul.
Jan 21 11:01:52 np0005590810 python3[71999]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:01:54 np0005590810 python3[72094]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 11:01:56 np0005590810 python3[72121]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 11:01:56 np0005590810 python3[72147]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:01:56 np0005590810 kernel: loop: module loaded
Jan 21 11:01:57 np0005590810 kernel: loop3: detected capacity change from 0 to 41943040
Jan 21 11:01:57 np0005590810 python3[72183]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:01:57 np0005590810 lvm[72186]: PV /dev/loop3 not used.
Jan 21 11:01:58 np0005590810 lvm[72195]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:01:58 np0005590810 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 21 11:01:58 np0005590810 lvm[72197]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 21 11:01:58 np0005590810 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 21 11:01:58 np0005590810 python3[72275]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 11:01:58 np0005590810 python3[72348]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769011318.2503252-36990-97236464933841/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:01:59 np0005590810 irqbalance[784]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 21 11:01:59 np0005590810 irqbalance[784]: IRQ 28 affinity is now unmanaged
Jan 21 11:01:59 np0005590810 python3[72398]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:01:59 np0005590810 systemd[1]: Reloading.
Jan 21 11:01:59 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:01:59 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:01:59 np0005590810 systemd[1]: Starting Ceph OSD losetup...
Jan 21 11:01:59 np0005590810 bash[72439]: /dev/loop3: [64513]:4328449 (/var/lib/ceph-osd-0.img)
Jan 21 11:02:00 np0005590810 systemd[1]: Finished Ceph OSD losetup.
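Editor's note: the unit started here was templated from `ceph-osd-losetup.service.j2` (see the copy task above), and its `bash` output (`/dev/loop3: [64513]:4328449 (/var/lib/ceph-osd-0.img)`) indicates it (re)attaches and reports the loop device so the OSD backing file survives reboots. The template itself is not in the log; a plausible, purely hypothetical sketch of the rendered unit is:

```
# Hypothetical reconstruction of /etc/systemd/system/ceph-osd-losetup-0.service.
# The actual rendered content is not captured in this log.
[Unit]
Description=Ceph OSD losetup
DefaultDependencies=no
After=systemd-udev-settle.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Attach the backing file if not already attached, then print the mapping.
ExecStart=/bin/bash -c 'losetup /dev/loop3 /var/lib/ceph-osd-0.img || true; losetup -j /var/lib/ceph-osd-0.img'

[Install]
WantedBy=multi-user.target
```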
Jan 21 11:02:00 np0005590810 lvm[72441]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:02:00 np0005590810 lvm[72441]: VG ceph_vg0 finished
Jan 21 11:02:02 np0005590810 python3[72465]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:02:04 np0005590810 python3[72558]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 11:02:08 np0005590810 python3[72615]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 11:02:12 np0005590810 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 11:02:12 np0005590810 systemd[1]: Starting man-db-cache-update.service...
Jan 21 11:02:14 np0005590810 python3[72729]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 11:02:14 np0005590810 python3[72757]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:02:14 np0005590810 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 11:02:14 np0005590810 systemd[1]: Finished man-db-cache-update.service.
Jan 21 11:02:14 np0005590810 systemd[1]: run-r5d60aa5a0d0c4e8cabae80f2e985bad5.service: Deactivated successfully.
Jan 21 11:02:14 np0005590810 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 11:02:14 np0005590810 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 11:02:15 np0005590810 python3[72822]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:02:15 np0005590810 python3[72848]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:02:15 np0005590810 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 11:02:16 np0005590810 python3[72927]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 11:02:16 np0005590810 python3[73000]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769011336.3404822-37182-620426041401/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:02:17 np0005590810 python3[73102]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 11:02:18 np0005590810 python3[73175]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769011337.3968978-37200-238880475932434/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:02:18 np0005590810 python3[73225]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 11:02:18 np0005590810 python3[73253]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 11:02:19 np0005590810 python3[73281]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 11:02:19 np0005590810 python3[73307]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 11:02:19 np0005590810 python3[73333]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid d9745984-fea8-5195-8ec5-61f685b5c785 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
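Editor's note: decoded from the record above (with `#012` as newline and the stray `\` before `--skip-monitoring-stack` being a line-continuation artifact from the playbook template), the bootstrap command was:

```
/usr/sbin/cephadm bootstrap \
  --skip-firewalld \
  --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
  --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
  --ssh-user ceph-admin \
  --allow-fqdn-hostname \
  --output-keyring /etc/ceph/ceph.client.admin.keyring \
  --output-config /etc/ceph/ceph.conf \
  --fsid d9745984-fea8-5195-8ec5-61f685b5c785 \
  --config /home/ceph-admin/assimilate_ceph.conf \
  --skip-monitoring-stack \
  --skip-dashboard \
  --mon-ip 192.168.122.100
```

The podman activity that follows (pulling `quay.io/ceph/ceph:v19`, then short-lived containers printing the Ceph version, a UID/GID pair, and a generated key) is cephadm's bootstrap probing the image and creating initial credentials.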
Jan 21 11:02:20 np0005590810 systemd-logind[795]: New session 19 of user ceph-admin.
Jan 21 11:02:20 np0005590810 systemd[1]: Created slice User Slice of UID 42477.
Jan 21 11:02:20 np0005590810 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 21 11:02:20 np0005590810 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 21 11:02:20 np0005590810 systemd[1]: Starting User Manager for UID 42477...
Jan 21 11:02:20 np0005590810 systemd[73341]: Queued start job for default target Main User Target.
Jan 21 11:02:20 np0005590810 systemd[73341]: Created slice User Application Slice.
Jan 21 11:02:20 np0005590810 systemd[73341]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 11:02:20 np0005590810 systemd[73341]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 11:02:20 np0005590810 systemd[73341]: Reached target Paths.
Jan 21 11:02:20 np0005590810 systemd[73341]: Reached target Timers.
Jan 21 11:02:20 np0005590810 systemd[73341]: Starting D-Bus User Message Bus Socket...
Jan 21 11:02:20 np0005590810 systemd[73341]: Starting Create User's Volatile Files and Directories...
Jan 21 11:02:20 np0005590810 systemd[73341]: Finished Create User's Volatile Files and Directories.
Jan 21 11:02:20 np0005590810 systemd[73341]: Listening on D-Bus User Message Bus Socket.
Jan 21 11:02:20 np0005590810 systemd[73341]: Reached target Sockets.
Jan 21 11:02:20 np0005590810 systemd[73341]: Reached target Basic System.
Jan 21 11:02:20 np0005590810 systemd[73341]: Reached target Main User Target.
Jan 21 11:02:20 np0005590810 systemd[73341]: Startup finished in 109ms.
Jan 21 11:02:20 np0005590810 systemd[1]: Started User Manager for UID 42477.
Jan 21 11:02:20 np0005590810 systemd[1]: Started Session 19 of User ceph-admin.
Jan 21 11:02:20 np0005590810 systemd[1]: session-19.scope: Deactivated successfully.
Jan 21 11:02:20 np0005590810 systemd-logind[795]: Session 19 logged out. Waiting for processes to exit.
Jan 21 11:02:20 np0005590810 systemd-logind[795]: Removed session 19.
Jan 21 11:02:20 np0005590810 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 11:02:20 np0005590810 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 11:02:22 np0005590810 systemd[1]: var-lib-containers-storage-overlay-compat1646045384-lower\x2dmapped.mount: Deactivated successfully.
Jan 21 11:02:30 np0005590810 systemd[1]: Stopping User Manager for UID 42477...
Jan 21 11:02:30 np0005590810 systemd[73341]: Activating special unit Exit the Session...
Jan 21 11:02:30 np0005590810 systemd[73341]: Stopped target Main User Target.
Jan 21 11:02:30 np0005590810 systemd[73341]: Stopped target Basic System.
Jan 21 11:02:30 np0005590810 systemd[73341]: Stopped target Paths.
Jan 21 11:02:30 np0005590810 systemd[73341]: Stopped target Sockets.
Jan 21 11:02:30 np0005590810 systemd[73341]: Stopped target Timers.
Jan 21 11:02:30 np0005590810 systemd[73341]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 21 11:02:30 np0005590810 systemd[73341]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 21 11:02:30 np0005590810 systemd[73341]: Closed D-Bus User Message Bus Socket.
Jan 21 11:02:30 np0005590810 systemd[73341]: Stopped Create User's Volatile Files and Directories.
Jan 21 11:02:30 np0005590810 systemd[73341]: Removed slice User Application Slice.
Jan 21 11:02:30 np0005590810 systemd[73341]: Reached target Shutdown.
Jan 21 11:02:30 np0005590810 systemd[73341]: Finished Exit the Session.
Jan 21 11:02:30 np0005590810 systemd[73341]: Reached target Exit the Session.
Jan 21 11:02:30 np0005590810 systemd[1]: user@42477.service: Deactivated successfully.
Jan 21 11:02:30 np0005590810 systemd[1]: Stopped User Manager for UID 42477.
Jan 21 11:02:30 np0005590810 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 21 11:02:30 np0005590810 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 21 11:02:30 np0005590810 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 21 11:02:30 np0005590810 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 21 11:02:30 np0005590810 systemd[1]: Removed slice User Slice of UID 42477.
Jan 21 11:02:42 np0005590810 podman[73434]: 2026-01-21 16:02:42.050036603 +0000 UTC m=+21.462464900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:42 np0005590810 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 11:02:42 np0005590810 podman[73506]: 2026-01-21 16:02:42.113883395 +0000 UTC m=+0.042691896 container create cdc482da2b900e4cb4c5cff67e075954ecff964194b871deefde6ce74a20303a (image=quay.io/ceph/ceph:v19, name=agitated_thompson, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 11:02:42 np0005590810 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 21 11:02:42 np0005590810 systemd[1]: Started libpod-conmon-cdc482da2b900e4cb4c5cff67e075954ecff964194b871deefde6ce74a20303a.scope.
Jan 21 11:02:42 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:42 np0005590810 podman[73506]: 2026-01-21 16:02:42.093190852 +0000 UTC m=+0.021999383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:42 np0005590810 podman[73506]: 2026-01-21 16:02:42.195187297 +0000 UTC m=+0.123995818 container init cdc482da2b900e4cb4c5cff67e075954ecff964194b871deefde6ce74a20303a (image=quay.io/ceph/ceph:v19, name=agitated_thompson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 21 11:02:42 np0005590810 podman[73506]: 2026-01-21 16:02:42.201191602 +0000 UTC m=+0.130000103 container start cdc482da2b900e4cb4c5cff67e075954ecff964194b871deefde6ce74a20303a (image=quay.io/ceph/ceph:v19, name=agitated_thompson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:02:42 np0005590810 podman[73506]: 2026-01-21 16:02:42.205469566 +0000 UTC m=+0.134278087 container attach cdc482da2b900e4cb4c5cff67e075954ecff964194b871deefde6ce74a20303a (image=quay.io/ceph/ceph:v19, name=agitated_thompson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:02:42 np0005590810 agitated_thompson[73522]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Jan 21 11:02:42 np0005590810 systemd[1]: libpod-cdc482da2b900e4cb4c5cff67e075954ecff964194b871deefde6ce74a20303a.scope: Deactivated successfully.
Jan 21 11:02:42 np0005590810 podman[73506]: 2026-01-21 16:02:42.298017327 +0000 UTC m=+0.226825828 container died cdc482da2b900e4cb4c5cff67e075954ecff964194b871deefde6ce74a20303a (image=quay.io/ceph/ceph:v19, name=agitated_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:02:42 np0005590810 systemd[1]: var-lib-containers-storage-overlay-b508c92f9c9f8061f95286d9525eabd4d2104dfc33b11d939bfd1026d5eb0241-merged.mount: Deactivated successfully.
Jan 21 11:02:42 np0005590810 podman[73506]: 2026-01-21 16:02:42.334050625 +0000 UTC m=+0.262859126 container remove cdc482da2b900e4cb4c5cff67e075954ecff964194b871deefde6ce74a20303a (image=quay.io/ceph/ceph:v19, name=agitated_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:02:42 np0005590810 systemd[1]: libpod-conmon-cdc482da2b900e4cb4c5cff67e075954ecff964194b871deefde6ce74a20303a.scope: Deactivated successfully.
Jan 21 11:02:42 np0005590810 podman[73538]: 2026-01-21 16:02:42.394212001 +0000 UTC m=+0.039273849 container create a32cc61ecced6e771a776db2b7e8c2a07e5677cc0bf13d94040ab0e500af7239 (image=quay.io/ceph/ceph:v19, name=jovial_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:02:42 np0005590810 systemd[1]: Started libpod-conmon-a32cc61ecced6e771a776db2b7e8c2a07e5677cc0bf13d94040ab0e500af7239.scope.
Jan 21 11:02:42 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:42 np0005590810 podman[73538]: 2026-01-21 16:02:42.454188782 +0000 UTC m=+0.099250630 container init a32cc61ecced6e771a776db2b7e8c2a07e5677cc0bf13d94040ab0e500af7239 (image=quay.io/ceph/ceph:v19, name=jovial_shaw, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 11:02:42 np0005590810 podman[73538]: 2026-01-21 16:02:42.459803676 +0000 UTC m=+0.104865524 container start a32cc61ecced6e771a776db2b7e8c2a07e5677cc0bf13d94040ab0e500af7239 (image=quay.io/ceph/ceph:v19, name=jovial_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:02:42 np0005590810 podman[73538]: 2026-01-21 16:02:42.463614495 +0000 UTC m=+0.108676363 container attach a32cc61ecced6e771a776db2b7e8c2a07e5677cc0bf13d94040ab0e500af7239 (image=quay.io/ceph/ceph:v19, name=jovial_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:02:42 np0005590810 jovial_shaw[73554]: 167 167
Jan 21 11:02:42 np0005590810 systemd[1]: libpod-a32cc61ecced6e771a776db2b7e8c2a07e5677cc0bf13d94040ab0e500af7239.scope: Deactivated successfully.
Jan 21 11:02:42 np0005590810 podman[73538]: 2026-01-21 16:02:42.465681309 +0000 UTC m=+0.110743157 container died a32cc61ecced6e771a776db2b7e8c2a07e5677cc0bf13d94040ab0e500af7239 (image=quay.io/ceph/ceph:v19, name=jovial_shaw, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 21 11:02:42 np0005590810 podman[73538]: 2026-01-21 16:02:42.376378448 +0000 UTC m=+0.021440326 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:42 np0005590810 podman[73538]: 2026-01-21 16:02:42.49666901 +0000 UTC m=+0.141730848 container remove a32cc61ecced6e771a776db2b7e8c2a07e5677cc0bf13d94040ab0e500af7239 (image=quay.io/ceph/ceph:v19, name=jovial_shaw, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:02:42 np0005590810 systemd[1]: libpod-conmon-a32cc61ecced6e771a776db2b7e8c2a07e5677cc0bf13d94040ab0e500af7239.scope: Deactivated successfully.
Jan 21 11:02:42 np0005590810 podman[73571]: 2026-01-21 16:02:42.558662864 +0000 UTC m=+0.040201839 container create 1f2c5017ee91e9e920407b9ae40c38fc1d862813292d062d7d5d7ec6c1baaf10 (image=quay.io/ceph/ceph:v19, name=jovial_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:02:42 np0005590810 systemd[1]: Started libpod-conmon-1f2c5017ee91e9e920407b9ae40c38fc1d862813292d062d7d5d7ec6c1baaf10.scope.
Jan 21 11:02:42 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:42 np0005590810 podman[73571]: 2026-01-21 16:02:42.62364711 +0000 UTC m=+0.105186105 container init 1f2c5017ee91e9e920407b9ae40c38fc1d862813292d062d7d5d7ec6c1baaf10 (image=quay.io/ceph/ceph:v19, name=jovial_lewin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 11:02:42 np0005590810 podman[73571]: 2026-01-21 16:02:42.63011202 +0000 UTC m=+0.111650995 container start 1f2c5017ee91e9e920407b9ae40c38fc1d862813292d062d7d5d7ec6c1baaf10 (image=quay.io/ceph/ceph:v19, name=jovial_lewin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:02:42 np0005590810 podman[73571]: 2026-01-21 16:02:42.63364529 +0000 UTC m=+0.115184285 container attach 1f2c5017ee91e9e920407b9ae40c38fc1d862813292d062d7d5d7ec6c1baaf10 (image=quay.io/ceph/ceph:v19, name=jovial_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:02:42 np0005590810 podman[73571]: 2026-01-21 16:02:42.541855283 +0000 UTC m=+0.023394278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:42 np0005590810 jovial_lewin[73587]: AQCi+HBplj30JhAAubYG0fNrj/U1+YtRc30ubw==
Jan 21 11:02:42 np0005590810 systemd[1]: libpod-1f2c5017ee91e9e920407b9ae40c38fc1d862813292d062d7d5d7ec6c1baaf10.scope: Deactivated successfully.
Jan 21 11:02:42 np0005590810 podman[73571]: 2026-01-21 16:02:42.657969004 +0000 UTC m=+0.139507979 container died 1f2c5017ee91e9e920407b9ae40c38fc1d862813292d062d7d5d7ec6c1baaf10 (image=quay.io/ceph/ceph:v19, name=jovial_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 11:02:42 np0005590810 podman[73571]: 2026-01-21 16:02:42.689737581 +0000 UTC m=+0.171276556 container remove 1f2c5017ee91e9e920407b9ae40c38fc1d862813292d062d7d5d7ec6c1baaf10 (image=quay.io/ceph/ceph:v19, name=jovial_lewin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Jan 21 11:02:42 np0005590810 systemd[1]: libpod-conmon-1f2c5017ee91e9e920407b9ae40c38fc1d862813292d062d7d5d7ec6c1baaf10.scope: Deactivated successfully.
Jan 21 11:02:42 np0005590810 podman[73607]: 2026-01-21 16:02:42.750126894 +0000 UTC m=+0.039316351 container create 3d30822b30ed6d729b5c5c877a873d82158be949edb5ee0158b26accab6286b8 (image=quay.io/ceph/ceph:v19, name=laughing_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:02:42 np0005590810 systemd[1]: Started libpod-conmon-3d30822b30ed6d729b5c5c877a873d82158be949edb5ee0158b26accab6286b8.scope.
Jan 21 11:02:42 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:42 np0005590810 podman[73607]: 2026-01-21 16:02:42.805820582 +0000 UTC m=+0.095010069 container init 3d30822b30ed6d729b5c5c877a873d82158be949edb5ee0158b26accab6286b8 (image=quay.io/ceph/ceph:v19, name=laughing_euclid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 21 11:02:42 np0005590810 podman[73607]: 2026-01-21 16:02:42.813333235 +0000 UTC m=+0.102522702 container start 3d30822b30ed6d729b5c5c877a873d82158be949edb5ee0158b26accab6286b8 (image=quay.io/ceph/ceph:v19, name=laughing_euclid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:02:42 np0005590810 podman[73607]: 2026-01-21 16:02:42.817563416 +0000 UTC m=+0.106752903 container attach 3d30822b30ed6d729b5c5c877a873d82158be949edb5ee0158b26accab6286b8 (image=quay.io/ceph/ceph:v19, name=laughing_euclid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:02:42 np0005590810 podman[73607]: 2026-01-21 16:02:42.7325936 +0000 UTC m=+0.021783087 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:42 np0005590810 laughing_euclid[73624]: AQCi+HBpfZKEMRAAitcQyTeNJvtayE0TsrSzqA==
Jan 21 11:02:42 np0005590810 systemd[1]: libpod-3d30822b30ed6d729b5c5c877a873d82158be949edb5ee0158b26accab6286b8.scope: Deactivated successfully.
Jan 21 11:02:42 np0005590810 podman[73607]: 2026-01-21 16:02:42.833847191 +0000 UTC m=+0.123036648 container died 3d30822b30ed6d729b5c5c877a873d82158be949edb5ee0158b26accab6286b8 (image=quay.io/ceph/ceph:v19, name=laughing_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Jan 21 11:02:42 np0005590810 podman[73607]: 2026-01-21 16:02:42.872435939 +0000 UTC m=+0.161625396 container remove 3d30822b30ed6d729b5c5c877a873d82158be949edb5ee0158b26accab6286b8 (image=quay.io/ceph/ceph:v19, name=laughing_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 11:02:42 np0005590810 systemd[1]: libpod-conmon-3d30822b30ed6d729b5c5c877a873d82158be949edb5ee0158b26accab6286b8.scope: Deactivated successfully.
Jan 21 11:02:42 np0005590810 podman[73643]: 2026-01-21 16:02:42.933932227 +0000 UTC m=+0.041263252 container create d29224043d3aaa48a022527bcc1045cacb4aba26770f6e5c330071a4cf40c217 (image=quay.io/ceph/ceph:v19, name=heuristic_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 21 11:02:42 np0005590810 systemd[1]: Started libpod-conmon-d29224043d3aaa48a022527bcc1045cacb4aba26770f6e5c330071a4cf40c217.scope.
Jan 21 11:02:42 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:43 np0005590810 podman[73643]: 2026-01-21 16:02:42.917150856 +0000 UTC m=+0.024481901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:43 np0005590810 podman[73643]: 2026-01-21 16:02:43.686527017 +0000 UTC m=+0.793858052 container init d29224043d3aaa48a022527bcc1045cacb4aba26770f6e5c330071a4cf40c217 (image=quay.io/ceph/ceph:v19, name=heuristic_dijkstra, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:02:43 np0005590810 podman[73643]: 2026-01-21 16:02:43.691991326 +0000 UTC m=+0.799322351 container start d29224043d3aaa48a022527bcc1045cacb4aba26770f6e5c330071a4cf40c217 (image=quay.io/ceph/ceph:v19, name=heuristic_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:02:43 np0005590810 heuristic_dijkstra[73659]: AQCj+HBp1ouzKhAAEGaHPLFVuuaN+H0xzZ/G5A==
Jan 21 11:02:43 np0005590810 systemd[1]: libpod-d29224043d3aaa48a022527bcc1045cacb4aba26770f6e5c330071a4cf40c217.scope: Deactivated successfully.
Jan 21 11:02:45 np0005590810 podman[73643]: 2026-01-21 16:02:45.605889637 +0000 UTC m=+2.713220672 container attach d29224043d3aaa48a022527bcc1045cacb4aba26770f6e5c330071a4cf40c217 (image=quay.io/ceph/ceph:v19, name=heuristic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:02:45 np0005590810 podman[73643]: 2026-01-21 16:02:45.60662267 +0000 UTC m=+2.713953695 container died d29224043d3aaa48a022527bcc1045cacb4aba26770f6e5c330071a4cf40c217 (image=quay.io/ceph/ceph:v19, name=heuristic_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:02:45 np0005590810 systemd[1]: var-lib-containers-storage-overlay-60b57f345b138e632d59c5398bffec55506bbf2e735ef4a87460b71e346184ac-merged.mount: Deactivated successfully.
Jan 21 11:02:45 np0005590810 podman[73643]: 2026-01-21 16:02:45.98089975 +0000 UTC m=+3.088230775 container remove d29224043d3aaa48a022527bcc1045cacb4aba26770f6e5c330071a4cf40c217 (image=quay.io/ceph/ceph:v19, name=heuristic_dijkstra, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 21 11:02:45 np0005590810 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 11:02:46 np0005590810 podman[73681]: 2026-01-21 16:02:46.037829698 +0000 UTC m=+0.028892538 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:46 np0005590810 podman[73681]: 2026-01-21 16:02:46.264394617 +0000 UTC m=+0.255457447 container create 69243689ed75f87c2fc2719e965947cafa2c515d057f06c2cdbcbb8e2afa7147 (image=quay.io/ceph/ceph:v19, name=nervous_yalow, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:02:46 np0005590810 systemd[1]: libpod-conmon-d29224043d3aaa48a022527bcc1045cacb4aba26770f6e5c330071a4cf40c217.scope: Deactivated successfully.
Jan 21 11:02:46 np0005590810 systemd[1]: Started libpod-conmon-69243689ed75f87c2fc2719e965947cafa2c515d057f06c2cdbcbb8e2afa7147.scope.
Jan 21 11:02:46 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:46 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c10f889ed1e2c291a2ef07e8d6cf9c3daf544d6700f2ad11aaae7ed2593cc6e/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:46 np0005590810 podman[73681]: 2026-01-21 16:02:46.326078651 +0000 UTC m=+0.317141501 container init 69243689ed75f87c2fc2719e965947cafa2c515d057f06c2cdbcbb8e2afa7147 (image=quay.io/ceph/ceph:v19, name=nervous_yalow, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:02:46 np0005590810 podman[73681]: 2026-01-21 16:02:46.331437197 +0000 UTC m=+0.322500027 container start 69243689ed75f87c2fc2719e965947cafa2c515d057f06c2cdbcbb8e2afa7147 (image=quay.io/ceph/ceph:v19, name=nervous_yalow, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:02:46 np0005590810 podman[73681]: 2026-01-21 16:02:46.33537226 +0000 UTC m=+0.326435090 container attach 69243689ed75f87c2fc2719e965947cafa2c515d057f06c2cdbcbb8e2afa7147 (image=quay.io/ceph/ceph:v19, name=nervous_yalow, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Jan 21 11:02:46 np0005590810 nervous_yalow[73697]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 21 11:02:46 np0005590810 nervous_yalow[73697]: setting min_mon_release = quincy
Jan 21 11:02:46 np0005590810 nervous_yalow[73697]: /usr/bin/monmaptool: set fsid to d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:02:46 np0005590810 nervous_yalow[73697]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 21 11:02:46 np0005590810 systemd[1]: libpod-69243689ed75f87c2fc2719e965947cafa2c515d057f06c2cdbcbb8e2afa7147.scope: Deactivated successfully.
Jan 21 11:02:46 np0005590810 podman[73681]: 2026-01-21 16:02:46.35988971 +0000 UTC m=+0.350952540 container died 69243689ed75f87c2fc2719e965947cafa2c515d057f06c2cdbcbb8e2afa7147 (image=quay.io/ceph/ceph:v19, name=nervous_yalow, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 21 11:02:46 np0005590810 systemd[1]: var-lib-containers-storage-overlay-0c10f889ed1e2c291a2ef07e8d6cf9c3daf544d6700f2ad11aaae7ed2593cc6e-merged.mount: Deactivated successfully.
Jan 21 11:02:46 np0005590810 podman[73681]: 2026-01-21 16:02:46.395572487 +0000 UTC m=+0.386635317 container remove 69243689ed75f87c2fc2719e965947cafa2c515d057f06c2cdbcbb8e2afa7147 (image=quay.io/ceph/ceph:v19, name=nervous_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:02:46 np0005590810 systemd[1]: libpod-conmon-69243689ed75f87c2fc2719e965947cafa2c515d057f06c2cdbcbb8e2afa7147.scope: Deactivated successfully.
Jan 21 11:02:46 np0005590810 podman[73716]: 2026-01-21 16:02:46.452611327 +0000 UTC m=+0.038507456 container create cb4ca6ce0f29549dbada6f00833d7aee98503c714937b3b5ee49d764863fcabc (image=quay.io/ceph/ceph:v19, name=compassionate_colden, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:02:46 np0005590810 systemd[1]: Started libpod-conmon-cb4ca6ce0f29549dbada6f00833d7aee98503c714937b3b5ee49d764863fcabc.scope.
Jan 21 11:02:46 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:46 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1892c88bbfc02ab986f9f61c01a163631c4c7964fb2cb067e52363f0794a058/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:46 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1892c88bbfc02ab986f9f61c01a163631c4c7964fb2cb067e52363f0794a058/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:46 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1892c88bbfc02ab986f9f61c01a163631c4c7964fb2cb067e52363f0794a058/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:46 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1892c88bbfc02ab986f9f61c01a163631c4c7964fb2cb067e52363f0794a058/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:46 np0005590810 podman[73716]: 2026-01-21 16:02:46.509543303 +0000 UTC m=+0.095439442 container init cb4ca6ce0f29549dbada6f00833d7aee98503c714937b3b5ee49d764863fcabc (image=quay.io/ceph/ceph:v19, name=compassionate_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 11:02:46 np0005590810 podman[73716]: 2026-01-21 16:02:46.515809677 +0000 UTC m=+0.101705806 container start cb4ca6ce0f29549dbada6f00833d7aee98503c714937b3b5ee49d764863fcabc (image=quay.io/ceph/ceph:v19, name=compassionate_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 11:02:46 np0005590810 podman[73716]: 2026-01-21 16:02:46.520342128 +0000 UTC m=+0.106238257 container attach cb4ca6ce0f29549dbada6f00833d7aee98503c714937b3b5ee49d764863fcabc (image=quay.io/ceph/ceph:v19, name=compassionate_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 21 11:02:46 np0005590810 podman[73716]: 2026-01-21 16:02:46.435568708 +0000 UTC m=+0.021464857 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:46 np0005590810 systemd[1]: libpod-cb4ca6ce0f29549dbada6f00833d7aee98503c714937b3b5ee49d764863fcabc.scope: Deactivated successfully.
Jan 21 11:02:46 np0005590810 podman[73716]: 2026-01-21 16:02:46.601317141 +0000 UTC m=+0.187213290 container died cb4ca6ce0f29549dbada6f00833d7aee98503c714937b3b5ee49d764863fcabc (image=quay.io/ceph/ceph:v19, name=compassionate_colden, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 21 11:02:46 np0005590810 podman[73716]: 2026-01-21 16:02:46.637052879 +0000 UTC m=+0.222949008 container remove cb4ca6ce0f29549dbada6f00833d7aee98503c714937b3b5ee49d764863fcabc (image=quay.io/ceph/ceph:v19, name=compassionate_colden, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:02:46 np0005590810 systemd[1]: libpod-conmon-cb4ca6ce0f29549dbada6f00833d7aee98503c714937b3b5ee49d764863fcabc.scope: Deactivated successfully.
Jan 21 11:02:46 np0005590810 systemd[1]: Reloading.
Jan 21 11:02:46 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:02:46 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:02:46 np0005590810 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 11:02:46 np0005590810 systemd[1]: Reloading.
Jan 21 11:02:46 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:02:46 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:02:47 np0005590810 systemd[1]: Reached target All Ceph clusters and services.
Jan 21 11:02:47 np0005590810 systemd[1]: Reloading.
Jan 21 11:02:47 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:02:47 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:02:47 np0005590810 systemd[1]: Reached target Ceph cluster d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:02:47 np0005590810 systemd[1]: Reloading.
Jan 21 11:02:47 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:02:47 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:02:47 np0005590810 systemd[1]: Reloading.
Jan 21 11:02:47 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:02:47 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:02:47 np0005590810 systemd[1]: Created slice Slice /system/ceph-d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:02:47 np0005590810 systemd[1]: Reached target System Time Set.
Jan 21 11:02:47 np0005590810 systemd[1]: Reached target System Time Synchronized.
Jan 21 11:02:47 np0005590810 systemd[1]: Starting Ceph mon.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:02:48 np0005590810 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 11:02:48 np0005590810 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 11:02:48 np0005590810 podman[74008]: 2026-01-21 16:02:48.158306588 +0000 UTC m=+0.038628091 container create a477888f3f24606b2a14fe6f45da3414724486d3f8eb75775a41f8a6e29d4ca5 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 11:02:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aec051a8e3e8a1e827c37dafe88c540da9df2a8bb43c17d9444fb7df55162c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aec051a8e3e8a1e827c37dafe88c540da9df2a8bb43c17d9444fb7df55162c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aec051a8e3e8a1e827c37dafe88c540da9df2a8bb43c17d9444fb7df55162c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aec051a8e3e8a1e827c37dafe88c540da9df2a8bb43c17d9444fb7df55162c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:48 np0005590810 podman[74008]: 2026-01-21 16:02:48.220391443 +0000 UTC m=+0.100712966 container init a477888f3f24606b2a14fe6f45da3414724486d3f8eb75775a41f8a6e29d4ca5 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 11:02:48 np0005590810 podman[74008]: 2026-01-21 16:02:48.227082251 +0000 UTC m=+0.107403754 container start a477888f3f24606b2a14fe6f45da3414724486d3f8eb75775a41f8a6e29d4ca5 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:02:48 np0005590810 bash[74008]: a477888f3f24606b2a14fe6f45da3414724486d3f8eb75775a41f8a6e29d4ca5
Jan 21 11:02:48 np0005590810 podman[74008]: 2026-01-21 16:02:48.142023592 +0000 UTC m=+0.022345115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:48 np0005590810 systemd[1]: Started Ceph mon.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: pidfile_write: ignore empty --pid-file
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: load: jerasure load: lrc 
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: RocksDB version: 7.9.2
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Git sha 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: DB SUMMARY
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: DB Session ID:  161KI88YQE1MD37KEAKS
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: CURRENT file:  CURRENT
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                         Options.error_if_exists: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                       Options.create_if_missing: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                                     Options.env: 0x559970d96c20
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                                Options.info_log: 0x559972eced60
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                              Options.statistics: (nil)
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                               Options.use_fsync: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                              Options.db_log_dir: 
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                                 Options.wal_dir: 
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                    Options.write_buffer_manager: 0x559972ed3900
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                  Options.unordered_write: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                               Options.row_cache: None
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                              Options.wal_filter: None
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.two_write_queues: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.wal_compression: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.atomic_flush: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.max_background_jobs: 2
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.max_background_compactions: -1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.max_subcompactions: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.max_total_wal_size: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                          Options.max_open_files: -1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:       Options.compaction_readahead_size: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Compression algorithms supported:
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: #011kZSTD supported: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: #011kXpressCompression supported: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: #011kBZip2Compression supported: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: #011kLZ4Compression supported: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: #011kZlibCompression supported: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: #011kSnappyCompression supported: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:           Options.merge_operator: 
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:        Options.compaction_filter: None
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559972ece500)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559972ef3350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:        Options.write_buffer_size: 33554432
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:  Options.max_write_buffer_number: 2
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:          Options.compression: NoCompression
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.num_levels: 7
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011368270604, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011368305861, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "161KI88YQE1MD37KEAKS", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011368306034, "job": 1, "event": "recovery_finished"}
Jan 21 11:02:48 np0005590810 ceph-mon[74027]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 21 11:02:48 np0005590810 podman[74028]: 2026-01-21 16:02:48.287105234 +0000 UTC m=+0.025788001 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:48 np0005590810 podman[74028]: 2026-01-21 16:02:48.638557208 +0000 UTC m=+0.377239955 container create 3a9c7e7fee1cf1448fb2fb4ce85c6e447fe6427ac43f971cd63445f259cde233 (image=quay.io/ceph/ceph:v19, name=sweet_blackwell, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559972ef4e00
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: rocksdb: DB pointer 0x559972ffe000
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.8 total, 0.8 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.04              0.00         1    0.035       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.04              0.00         1    0.035       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.04              0.00         1    0.035       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.04              0.00         1    0.035       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.8 total, 0.8 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559972ef3350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@-1(???) e0 preinit fsid d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 21 11:02:49 np0005590810 systemd[1]: Started libpod-conmon-3a9c7e7fee1cf1448fb2fb4ce85c6e447fe6427ac43f971cd63445f259cde233.scope.
Jan 21 11:02:49 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:49 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2925f79aee620cadb47f48e691a59c4135f861d8e72f5d98f2832dd12f8946/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:49 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2925f79aee620cadb47f48e691a59c4135f861d8e72f5d98f2832dd12f8946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:49 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2925f79aee620cadb47f48e691a59c4135f861d8e72f5d98f2832dd12f8946/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 11:02:49 np0005590810 podman[74028]: 2026-01-21 16:02:49.488612712 +0000 UTC m=+1.227295469 container init 3a9c7e7fee1cf1448fb2fb4ce85c6e447fe6427ac43f971cd63445f259cde233 (image=quay.io/ceph/ceph:v19, name=sweet_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 21 11:02:49 np0005590810 podman[74028]: 2026-01-21 16:02:49.497544509 +0000 UTC m=+1.236227256 container start 3a9c7e7fee1cf1448fb2fb4ce85c6e447fe6427ac43f971cd63445f259cde233 (image=quay.io/ceph/ceph:v19, name=sweet_blackwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 21 11:02:49 np0005590810 podman[74028]: 2026-01-21 16:02:49.687943246 +0000 UTC m=+1.426626023 container attach 3a9c7e7fee1cf1448fb2fb4ce85c6e447fe6427ac43f971cd63445f259cde233 (image=quay.io/ceph/ceph:v19, name=sweet_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(cluster) log [DBG] : fsid d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(cluster) log [DBG] : last_changed 2026-01-21T16:02:46.356140+0000
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(cluster) log [DBG] : created 2026-01-21T16:02:46.356140+0000
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).mds e1 new map
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-01-21T16:02:49:724869+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(cluster) log [DBG] : fsmap 
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mkfs d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 21 11:02:49 np0005590810 ceph-mon[74027]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691435945' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]:  cluster:
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]:    id:     d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]:    health: HEALTH_OK
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]: 
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]:  services:
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]:    mon: 1 daemons, quorum compute-0 (age 0.167506s)
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]:    mgr: no daemons active
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]:    osd: 0 osds: 0 up, 0 in
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]: 
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]:  data:
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]:    pools:   0 pools, 0 pgs
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]:    objects: 0 objects, 0 B
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]:    usage:   0 B used, 0 B / 0 B avail
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]:    pgs:     
Jan 21 11:02:49 np0005590810 sweet_blackwell[74082]: 
Jan 21 11:02:49 np0005590810 systemd[1]: libpod-3a9c7e7fee1cf1448fb2fb4ce85c6e447fe6427ac43f971cd63445f259cde233.scope: Deactivated successfully.
Jan 21 11:02:49 np0005590810 podman[74028]: 2026-01-21 16:02:49.906187727 +0000 UTC m=+1.644870494 container died 3a9c7e7fee1cf1448fb2fb4ce85c6e447fe6427ac43f971cd63445f259cde233 (image=quay.io/ceph/ceph:v19, name=sweet_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 11:02:49 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8a2925f79aee620cadb47f48e691a59c4135f861d8e72f5d98f2832dd12f8946-merged.mount: Deactivated successfully.
Jan 21 11:02:49 np0005590810 podman[74028]: 2026-01-21 16:02:49.941256555 +0000 UTC m=+1.679939302 container remove 3a9c7e7fee1cf1448fb2fb4ce85c6e447fe6427ac43f971cd63445f259cde233 (image=quay.io/ceph/ceph:v19, name=sweet_blackwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:02:49 np0005590810 systemd[1]: libpod-conmon-3a9c7e7fee1cf1448fb2fb4ce85c6e447fe6427ac43f971cd63445f259cde233.scope: Deactivated successfully.
Jan 21 11:02:49 np0005590810 podman[74120]: 2026-01-21 16:02:49.998968286 +0000 UTC m=+0.035838883 container create 3b2a366e07c9efe0daf09b7365cfc4c517ace9b8c9f74e0dd14a29797cb2605f (image=quay.io/ceph/ceph:v19, name=compassionate_easley, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:02:50 np0005590810 systemd[1]: Started libpod-conmon-3b2a366e07c9efe0daf09b7365cfc4c517ace9b8c9f74e0dd14a29797cb2605f.scope.
Jan 21 11:02:50 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d1ef1042f3f1787e24db14d19ae824c1fd53de46577d6c897eecb3089980577/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d1ef1042f3f1787e24db14d19ae824c1fd53de46577d6c897eecb3089980577/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d1ef1042f3f1787e24db14d19ae824c1fd53de46577d6c897eecb3089980577/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d1ef1042f3f1787e24db14d19ae824c1fd53de46577d6c897eecb3089980577/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:50 np0005590810 podman[74120]: 2026-01-21 16:02:50.049262016 +0000 UTC m=+0.086132653 container init 3b2a366e07c9efe0daf09b7365cfc4c517ace9b8c9f74e0dd14a29797cb2605f (image=quay.io/ceph/ceph:v19, name=compassionate_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 11:02:50 np0005590810 podman[74120]: 2026-01-21 16:02:50.05712423 +0000 UTC m=+0.093994827 container start 3b2a366e07c9efe0daf09b7365cfc4c517ace9b8c9f74e0dd14a29797cb2605f (image=quay.io/ceph/ceph:v19, name=compassionate_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 11:02:50 np0005590810 podman[74120]: 2026-01-21 16:02:50.060214466 +0000 UTC m=+0.097085063 container attach 3b2a366e07c9efe0daf09b7365cfc4c517ace9b8c9f74e0dd14a29797cb2605f (image=quay.io/ceph/ceph:v19, name=compassionate_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:02:50 np0005590810 podman[74120]: 2026-01-21 16:02:49.983699442 +0000 UTC m=+0.020570069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3416990184' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3416990184' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 11:02:50 np0005590810 compassionate_easley[74137]: 
Jan 21 11:02:50 np0005590810 compassionate_easley[74137]: [global]
Jan 21 11:02:50 np0005590810 compassionate_easley[74137]: #011fsid = d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:02:50 np0005590810 compassionate_easley[74137]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 21 11:02:50 np0005590810 systemd[1]: libpod-3b2a366e07c9efe0daf09b7365cfc4c517ace9b8c9f74e0dd14a29797cb2605f.scope: Deactivated successfully.
Jan 21 11:02:50 np0005590810 podman[74120]: 2026-01-21 16:02:50.260920303 +0000 UTC m=+0.297790900 container died 3b2a366e07c9efe0daf09b7365cfc4c517ace9b8c9f74e0dd14a29797cb2605f (image=quay.io/ceph/ceph:v19, name=compassionate_easley, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:02:50 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8d1ef1042f3f1787e24db14d19ae824c1fd53de46577d6c897eecb3089980577-merged.mount: Deactivated successfully.
Jan 21 11:02:50 np0005590810 podman[74120]: 2026-01-21 16:02:50.29759986 +0000 UTC m=+0.334470457 container remove 3b2a366e07c9efe0daf09b7365cfc4c517ace9b8c9f74e0dd14a29797cb2605f (image=quay.io/ceph/ceph:v19, name=compassionate_easley, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:02:50 np0005590810 systemd[1]: libpod-conmon-3b2a366e07c9efe0daf09b7365cfc4c517ace9b8c9f74e0dd14a29797cb2605f.scope: Deactivated successfully.
Jan 21 11:02:50 np0005590810 podman[74173]: 2026-01-21 16:02:50.354510447 +0000 UTC m=+0.039048783 container create 8d21ff94b361ebf50f7741bc147a1fcad392d8b1b4d246ef6911f44de6096020 (image=quay.io/ceph/ceph:v19, name=eager_leakey, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:02:50 np0005590810 systemd[1]: Started libpod-conmon-8d21ff94b361ebf50f7741bc147a1fcad392d8b1b4d246ef6911f44de6096020.scope.
Jan 21 11:02:50 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0716d40bc4609d1d1b2b7db1b34add77ad012db7762d74b926101536a9c85d55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0716d40bc4609d1d1b2b7db1b34add77ad012db7762d74b926101536a9c85d55/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0716d40bc4609d1d1b2b7db1b34add77ad012db7762d74b926101536a9c85d55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0716d40bc4609d1d1b2b7db1b34add77ad012db7762d74b926101536a9c85d55/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:50 np0005590810 podman[74173]: 2026-01-21 16:02:50.40812652 +0000 UTC m=+0.092664856 container init 8d21ff94b361ebf50f7741bc147a1fcad392d8b1b4d246ef6911f44de6096020 (image=quay.io/ceph/ceph:v19, name=eager_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 21 11:02:50 np0005590810 podman[74173]: 2026-01-21 16:02:50.415391925 +0000 UTC m=+0.099930261 container start 8d21ff94b361ebf50f7741bc147a1fcad392d8b1b4d246ef6911f44de6096020 (image=quay.io/ceph/ceph:v19, name=eager_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 11:02:50 np0005590810 podman[74173]: 2026-01-21 16:02:50.420854295 +0000 UTC m=+0.105392661 container attach 8d21ff94b361ebf50f7741bc147a1fcad392d8b1b4d246ef6911f44de6096020 (image=quay.io/ceph/ceph:v19, name=eager_leakey, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:02:50 np0005590810 podman[74173]: 2026-01-21 16:02:50.335733374 +0000 UTC m=+0.020271730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3530303780' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:02:50 np0005590810 systemd[1]: libpod-8d21ff94b361ebf50f7741bc147a1fcad392d8b1b4d246ef6911f44de6096020.scope: Deactivated successfully.
Jan 21 11:02:50 np0005590810 conmon[74189]: conmon 8d21ff94b361ebf50f77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d21ff94b361ebf50f7741bc147a1fcad392d8b1b4d246ef6911f44de6096020.scope/container/memory.events
Jan 21 11:02:50 np0005590810 podman[74173]: 2026-01-21 16:02:50.606559877 +0000 UTC m=+0.291098213 container died 8d21ff94b361ebf50f7741bc147a1fcad392d8b1b4d246ef6911f44de6096020 (image=quay.io/ceph/ceph:v19, name=eager_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:02:50 np0005590810 systemd[1]: var-lib-containers-storage-overlay-0716d40bc4609d1d1b2b7db1b34add77ad012db7762d74b926101536a9c85d55-merged.mount: Deactivated successfully.
Jan 21 11:02:50 np0005590810 podman[74173]: 2026-01-21 16:02:50.651529952 +0000 UTC m=+0.336068288 container remove 8d21ff94b361ebf50f7741bc147a1fcad392d8b1b4d246ef6911f44de6096020 (image=quay.io/ceph/ceph:v19, name=eager_leakey, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:02:50 np0005590810 systemd[1]: libpod-conmon-8d21ff94b361ebf50f7741bc147a1fcad392d8b1b4d246ef6911f44de6096020.scope: Deactivated successfully.
Jan 21 11:02:50 np0005590810 systemd[1]: Stopping Ceph mon.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: from='client.? 192.168.122.100:0/3416990184' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: from='client.? 192.168.122.100:0/3416990184' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: mon.compute-0@0(leader) e1 shutdown
Jan 21 11:02:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0[74023]: 2026-01-21T16:02:50.816+0000 7fa696cee640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 21 11:02:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0[74023]: 2026-01-21T16:02:50.816+0000 7fa696cee640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 21 11:02:50 np0005590810 ceph-mon[74027]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 21 11:02:50 np0005590810 podman[74258]: 2026-01-21 16:02:50.951323773 +0000 UTC m=+0.164177814 container died a477888f3f24606b2a14fe6f45da3414724486d3f8eb75775a41f8a6e29d4ca5 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 11:02:50 np0005590810 systemd[1]: var-lib-containers-storage-overlay-20aec051a8e3e8a1e827c37dafe88c540da9df2a8bb43c17d9444fb7df55162c-merged.mount: Deactivated successfully.
Jan 21 11:02:50 np0005590810 podman[74258]: 2026-01-21 16:02:50.979439226 +0000 UTC m=+0.192293257 container remove a477888f3f24606b2a14fe6f45da3414724486d3f8eb75775a41f8a6e29d4ca5 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:02:50 np0005590810 bash[74258]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0
Jan 21 11:02:50 np0005590810 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 11:02:51 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@mon.compute-0.service: Deactivated successfully.
Jan 21 11:02:51 np0005590810 systemd[1]: Stopped Ceph mon.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:02:51 np0005590810 systemd[1]: Starting Ceph mon.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:02:51 np0005590810 podman[74361]: 2026-01-21 16:02:51.253429636 +0000 UTC m=+0.032639713 container create 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 11:02:51 np0005590810 podman[74361]: 2026-01-21 16:02:51.238803352 +0000 UTC m=+0.018013449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897f330e38a8417b39b158aadf8edecc7dd46af203488eacfdda99b44086e04b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897f330e38a8417b39b158aadf8edecc7dd46af203488eacfdda99b44086e04b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897f330e38a8417b39b158aadf8edecc7dd46af203488eacfdda99b44086e04b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897f330e38a8417b39b158aadf8edecc7dd46af203488eacfdda99b44086e04b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:51 np0005590810 podman[74361]: 2026-01-21 16:02:51.735676118 +0000 UTC m=+0.514886195 container init 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 21 11:02:51 np0005590810 podman[74361]: 2026-01-21 16:02:51.741655354 +0000 UTC m=+0.520865431 container start 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:02:51 np0005590810 bash[74361]: 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8
Jan 21 11:02:51 np0005590810 systemd[1]: Started Ceph mon.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: pidfile_write: ignore empty --pid-file
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: load: jerasure load: lrc 
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: RocksDB version: 7.9.2
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Git sha 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: DB SUMMARY
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: DB Session ID:  6KF744HPATS83NMB4LEU
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: CURRENT file:  CURRENT
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 59859 ; 
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                         Options.error_if_exists: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                       Options.create_if_missing: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                                     Options.env: 0x55e6f5fadc20
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                                Options.info_log: 0x55e6f770dac0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                              Options.statistics: (nil)
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                               Options.use_fsync: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                              Options.db_log_dir: 
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                                 Options.wal_dir: 
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                    Options.write_buffer_manager: 0x55e6f7711900
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                  Options.unordered_write: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                               Options.row_cache: None
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                              Options.wal_filter: None
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.two_write_queues: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.wal_compression: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.atomic_flush: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.max_background_jobs: 2
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.max_background_compactions: -1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.max_subcompactions: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.max_total_wal_size: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                          Options.max_open_files: -1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:       Options.compaction_readahead_size: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Compression algorithms supported:
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: #011kZSTD supported: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: #011kXpressCompression supported: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: #011kBZip2Compression supported: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: #011kLZ4Compression supported: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: #011kZlibCompression supported: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: #011kSnappyCompression supported: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:           Options.merge_operator: 
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:        Options.compaction_filter: None
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e6f770caa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e6f7731350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:        Options.write_buffer_size: 33554432
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:  Options.max_write_buffer_number: 2
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:          Options.compression: NoCompression
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.num_levels: 7
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011371785290, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011371790348, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 58095, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3209, "raw_average_key_size": 30, "raw_value_size": 55578, "raw_average_value_size": 529, "num_data_blocks": 9, "num_entries": 105, "num_filter_entries": 105, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011371, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011371790503, "job": 1, "event": "recovery_finished"}
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e6f7732e00
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: DB pointer 0x55e6f783c000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.13 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.0      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0   60.13 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.0      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.0      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.0      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.45 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.45 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e6f7731350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0@-1(???) e1 preinit fsid d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0@-1(???).mds e1 new map
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-01-21T16:02:49:724869+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsid d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : last_changed 2026-01-21T16:02:46.356140+0000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : created 2026-01-21T16:02:46.356140+0000
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap 
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 21 11:02:51 np0005590810 podman[74381]: 2026-01-21 16:02:51.818051874 +0000 UTC m=+0.044322666 container create 79f415bb812a3f34376768d171be559bc6853d0813d3df5ef3266c1b83045e8d (image=quay.io/ceph/ceph:v19, name=interesting_hertz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 21 11:02:51 np0005590810 systemd[1]: Started libpod-conmon-79f415bb812a3f34376768d171be559bc6853d0813d3df5ef3266c1b83045e8d.scope.
Jan 21 11:02:51 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf905cfcbc6aacc3db1d15d17e345b88848276701db7bc25cbc516b3d8e19e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf905cfcbc6aacc3db1d15d17e345b88848276701db7bc25cbc516b3d8e19e1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf905cfcbc6aacc3db1d15d17e345b88848276701db7bc25cbc516b3d8e19e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:51 np0005590810 ceph-mon[74380]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 11:02:51 np0005590810 podman[74381]: 2026-01-21 16:02:51.879966795 +0000 UTC m=+0.106237627 container init 79f415bb812a3f34376768d171be559bc6853d0813d3df5ef3266c1b83045e8d (image=quay.io/ceph/ceph:v19, name=interesting_hertz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 11:02:51 np0005590810 podman[74381]: 2026-01-21 16:02:51.885988362 +0000 UTC m=+0.112259144 container start 79f415bb812a3f34376768d171be559bc6853d0813d3df5ef3266c1b83045e8d (image=quay.io/ceph/ceph:v19, name=interesting_hertz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 11:02:51 np0005590810 podman[74381]: 2026-01-21 16:02:51.889090848 +0000 UTC m=+0.115361640 container attach 79f415bb812a3f34376768d171be559bc6853d0813d3df5ef3266c1b83045e8d (image=quay.io/ceph/ceph:v19, name=interesting_hertz, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:02:51 np0005590810 podman[74381]: 2026-01-21 16:02:51.800059046 +0000 UTC m=+0.026329858 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 21 11:02:52 np0005590810 systemd[1]: libpod-79f415bb812a3f34376768d171be559bc6853d0813d3df5ef3266c1b83045e8d.scope: Deactivated successfully.
Jan 21 11:02:52 np0005590810 podman[74381]: 2026-01-21 16:02:52.094677587 +0000 UTC m=+0.320948389 container died 79f415bb812a3f34376768d171be559bc6853d0813d3df5ef3266c1b83045e8d (image=quay.io/ceph/ceph:v19, name=interesting_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 21 11:02:52 np0005590810 systemd[1]: var-lib-containers-storage-overlay-2cf905cfcbc6aacc3db1d15d17e345b88848276701db7bc25cbc516b3d8e19e1-merged.mount: Deactivated successfully.
Jan 21 11:02:52 np0005590810 podman[74381]: 2026-01-21 16:02:52.130572331 +0000 UTC m=+0.356843123 container remove 79f415bb812a3f34376768d171be559bc6853d0813d3df5ef3266c1b83045e8d (image=quay.io/ceph/ceph:v19, name=interesting_hertz, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:02:52 np0005590810 systemd[1]: libpod-conmon-79f415bb812a3f34376768d171be559bc6853d0813d3df5ef3266c1b83045e8d.scope: Deactivated successfully.
Jan 21 11:02:52 np0005590810 podman[74472]: 2026-01-21 16:02:52.192289895 +0000 UTC m=+0.044185592 container create 4262d65f636c328db72bb2ca7befb2b65b855f085e1089b6db25fe440367f0e3 (image=quay.io/ceph/ceph:v19, name=pedantic_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 11:02:52 np0005590810 systemd[1]: Started libpod-conmon-4262d65f636c328db72bb2ca7befb2b65b855f085e1089b6db25fe440367f0e3.scope.
Jan 21 11:02:52 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f4e133ecf72bc6b8071d681ac6ac2db02391e4535b447e0557a66363e1f910/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f4e133ecf72bc6b8071d681ac6ac2db02391e4535b447e0557a66363e1f910/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f4e133ecf72bc6b8071d681ac6ac2db02391e4535b447e0557a66363e1f910/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:52 np0005590810 podman[74472]: 2026-01-21 16:02:52.254868416 +0000 UTC m=+0.106764143 container init 4262d65f636c328db72bb2ca7befb2b65b855f085e1089b6db25fe440367f0e3 (image=quay.io/ceph/ceph:v19, name=pedantic_noether, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 11:02:52 np0005590810 podman[74472]: 2026-01-21 16:02:52.259687887 +0000 UTC m=+0.111583594 container start 4262d65f636c328db72bb2ca7befb2b65b855f085e1089b6db25fe440367f0e3 (image=quay.io/ceph/ceph:v19, name=pedantic_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 21 11:02:52 np0005590810 podman[74472]: 2026-01-21 16:02:52.262396501 +0000 UTC m=+0.114292238 container attach 4262d65f636c328db72bb2ca7befb2b65b855f085e1089b6db25fe440367f0e3 (image=quay.io/ceph/ceph:v19, name=pedantic_noether, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:02:52 np0005590810 podman[74472]: 2026-01-21 16:02:52.17117568 +0000 UTC m=+0.023071417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 21 11:02:52 np0005590810 systemd[1]: libpod-4262d65f636c328db72bb2ca7befb2b65b855f085e1089b6db25fe440367f0e3.scope: Deactivated successfully.
Jan 21 11:02:52 np0005590810 podman[74472]: 2026-01-21 16:02:52.466927246 +0000 UTC m=+0.318822943 container died 4262d65f636c328db72bb2ca7befb2b65b855f085e1089b6db25fe440367f0e3 (image=quay.io/ceph/ceph:v19, name=pedantic_noether, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:02:52 np0005590810 systemd[1]: var-lib-containers-storage-overlay-e5f4e133ecf72bc6b8071d681ac6ac2db02391e4535b447e0557a66363e1f910-merged.mount: Deactivated successfully.
Jan 21 11:02:52 np0005590810 podman[74472]: 2026-01-21 16:02:52.500702203 +0000 UTC m=+0.352597910 container remove 4262d65f636c328db72bb2ca7befb2b65b855f085e1089b6db25fe440367f0e3 (image=quay.io/ceph/ceph:v19, name=pedantic_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:02:52 np0005590810 systemd[1]: libpod-conmon-4262d65f636c328db72bb2ca7befb2b65b855f085e1089b6db25fe440367f0e3.scope: Deactivated successfully.
Jan 21 11:02:52 np0005590810 systemd[1]: Reloading.
Jan 21 11:02:52 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:02:52 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:02:52 np0005590810 systemd[1]: Reloading.
Jan 21 11:02:52 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:02:52 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:02:53 np0005590810 systemd[1]: Starting Ceph mgr.compute-0.ygffhs for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:02:53 np0005590810 podman[74651]: 2026-01-21 16:02:53.222850029 +0000 UTC m=+0.038335481 container create 299628b491cdf1044d84009b95b33acd0cd4617e089af06185b8ca12d7e616fa (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:02:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2d6f359f65629147f390a231ba6bed86a65061617113df03729f2cb1c8ea28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2d6f359f65629147f390a231ba6bed86a65061617113df03729f2cb1c8ea28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2d6f359f65629147f390a231ba6bed86a65061617113df03729f2cb1c8ea28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2d6f359f65629147f390a231ba6bed86a65061617113df03729f2cb1c8ea28/merged/var/lib/ceph/mgr/ceph-compute-0.ygffhs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:53 np0005590810 podman[74651]: 2026-01-21 16:02:53.280112715 +0000 UTC m=+0.095598197 container init 299628b491cdf1044d84009b95b33acd0cd4617e089af06185b8ca12d7e616fa (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:02:53 np0005590810 podman[74651]: 2026-01-21 16:02:53.286162773 +0000 UTC m=+0.101648225 container start 299628b491cdf1044d84009b95b33acd0cd4617e089af06185b8ca12d7e616fa (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:02:53 np0005590810 bash[74651]: 299628b491cdf1044d84009b95b33acd0cd4617e089af06185b8ca12d7e616fa
Jan 21 11:02:53 np0005590810 podman[74651]: 2026-01-21 16:02:53.205296865 +0000 UTC m=+0.020782337 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:53 np0005590810 systemd[1]: Started Ceph mgr.compute-0.ygffhs for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:02:53 np0005590810 ceph-mgr[74671]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 11:02:53 np0005590810 ceph-mgr[74671]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 21 11:02:53 np0005590810 ceph-mgr[74671]: pidfile_write: ignore empty --pid-file
Jan 21 11:02:53 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'alerts'
Jan 21 11:02:53 np0005590810 podman[74672]: 2026-01-21 16:02:53.345185385 +0000 UTC m=+0.021657823 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:02:53 np0005590810 ceph-mgr[74671]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 11:02:53 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'balancer'
Jan 21 11:02:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:53.461+0000 7f1e4309e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 11:02:53 np0005590810 ceph-mgr[74671]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 11:02:53 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'cephadm'
Jan 21 11:02:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:53.562+0000 7f1e4309e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 11:02:53 np0005590810 podman[74672]: 2026-01-21 16:02:53.60642031 +0000 UTC m=+0.282892728 container create 8a1f5982a560409a08e4df455fb625f0636283e9bbf5f67829e9c9cf33886349 (image=quay.io/ceph/ceph:v19, name=gifted_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 21 11:02:53 np0005590810 systemd[1]: Started libpod-conmon-8a1f5982a560409a08e4df455fb625f0636283e9bbf5f67829e9c9cf33886349.scope.
Jan 21 11:02:53 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:02:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b99a165ef096aa63c0fbdce38ef89103ce02f69805e5cbbb7129824d857969/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b99a165ef096aa63c0fbdce38ef89103ce02f69805e5cbbb7129824d857969/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b99a165ef096aa63c0fbdce38ef89103ce02f69805e5cbbb7129824d857969/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:02:54 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'crash'
Jan 21 11:02:54 np0005590810 podman[74672]: 2026-01-21 16:02:54.357797681 +0000 UTC m=+1.034270119 container init 8a1f5982a560409a08e4df455fb625f0636283e9bbf5f67829e9c9cf33886349 (image=quay.io/ceph/ceph:v19, name=gifted_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:02:54 np0005590810 ceph-mgr[74671]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 11:02:54 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'dashboard'
Jan 21 11:02:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:54.360+0000 7f1e4309e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 11:02:54 np0005590810 podman[74672]: 2026-01-21 16:02:54.364770938 +0000 UTC m=+1.041243356 container start 8a1f5982a560409a08e4df455fb625f0636283e9bbf5f67829e9c9cf33886349 (image=quay.io/ceph/ceph:v19, name=gifted_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 21 11:02:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 21 11:02:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/414512714' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]: 
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]: {
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "health": {
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "status": "HEALTH_OK",
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "checks": {},
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "mutes": []
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    },
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "election_epoch": 5,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "quorum": [
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        0
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    ],
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "quorum_names": [
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "compute-0"
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    ],
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "quorum_age": 2,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "monmap": {
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "epoch": 1,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "min_mon_release_name": "squid",
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "num_mons": 1
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    },
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "osdmap": {
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "epoch": 1,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "num_osds": 0,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "num_up_osds": 0,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "osd_up_since": 0,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "num_in_osds": 0,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "osd_in_since": 0,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "num_remapped_pgs": 0
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    },
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "pgmap": {
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "pgs_by_state": [],
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "num_pgs": 0,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "num_pools": 0,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "num_objects": 0,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "data_bytes": 0,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "bytes_used": 0,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "bytes_avail": 0,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "bytes_total": 0
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    },
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "fsmap": {
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "epoch": 1,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "btime": "2026-01-21T16:02:49.724869+0000",
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "by_rank": [],
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "up:standby": 0
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    },
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "mgrmap": {
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "available": false,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "num_standbys": 0,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "modules": [
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:            "iostat",
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:            "nfs",
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:            "restful"
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        ],
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "services": {}
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    },
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "servicemap": {
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "epoch": 1,
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "modified": "2026-01-21T16:02:49.728816+0000",
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:        "services": {}
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    },
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]:    "progress_events": {}
Jan 21 11:02:54 np0005590810 gifted_kalam[74711]: }
Jan 21 11:02:54 np0005590810 systemd[1]: libpod-8a1f5982a560409a08e4df455fb625f0636283e9bbf5f67829e9c9cf33886349.scope: Deactivated successfully.
Jan 21 11:02:54 np0005590810 podman[74672]: 2026-01-21 16:02:54.779453144 +0000 UTC m=+1.455925662 container attach 8a1f5982a560409a08e4df455fb625f0636283e9bbf5f67829e9c9cf33886349 (image=quay.io/ceph/ceph:v19, name=gifted_kalam, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 21 11:02:54 np0005590810 podman[74672]: 2026-01-21 16:02:54.780645351 +0000 UTC m=+1.457117769 container died 8a1f5982a560409a08e4df455fb625f0636283e9bbf5f67829e9c9cf33886349 (image=quay.io/ceph/ceph:v19, name=gifted_kalam, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 21 11:02:54 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'devicehealth'
Jan 21 11:02:55 np0005590810 ceph-mgr[74671]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 11:02:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 11:02:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:55.008+0000 7f1e4309e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 11:02:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 11:02:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 11:02:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  from numpy import show_config as show_numpy_config
Jan 21 11:02:55 np0005590810 ceph-mgr[74671]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 11:02:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:55.185+0000 7f1e4309e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 11:02:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'influx'
Jan 21 11:02:55 np0005590810 ceph-mgr[74671]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 11:02:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'insights'
Jan 21 11:02:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:55.257+0000 7f1e4309e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 11:02:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'iostat'
Jan 21 11:02:55 np0005590810 ceph-mgr[74671]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 11:02:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'k8sevents'
Jan 21 11:02:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:55.403+0000 7f1e4309e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 11:02:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'localpool'
Jan 21 11:02:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 11:02:56 np0005590810 systemd[1]: var-lib-containers-storage-overlay-83b99a165ef096aa63c0fbdce38ef89103ce02f69805e5cbbb7129824d857969-merged.mount: Deactivated successfully.
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'mirroring'
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'nfs'
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'orchestrator'
Jan 21 11:02:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:56.428+0000 7f1e4309e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 11:02:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:56.640+0000 7f1e4309e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'osd_support'
Jan 21 11:02:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:56.718+0000 7f1e4309e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 11:02:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:56.785+0000 7f1e4309e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'progress'
Jan 21 11:02:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:56.871+0000 7f1e4309e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 11:02:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:56.940+0000 7f1e4309e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 11:02:56 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'prometheus'
Jan 21 11:02:57 np0005590810 ceph-mgr[74671]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 11:02:57 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rbd_support'
Jan 21 11:02:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:57.301+0000 7f1e4309e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 11:02:57 np0005590810 ceph-mgr[74671]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 11:02:57 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'restful'
Jan 21 11:02:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:57.400+0000 7f1e4309e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 11:02:57 np0005590810 chronyd[58461]: Selected source 147.189.136.126 (pool.ntp.org)
Jan 21 11:02:57 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rgw'
Jan 21 11:02:57 np0005590810 podman[74672]: 2026-01-21 16:02:57.70086707 +0000 UTC m=+4.377339488 container remove 8a1f5982a560409a08e4df455fb625f0636283e9bbf5f67829e9c9cf33886349 (image=quay.io/ceph/ceph:v19, name=gifted_kalam, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:02:57 np0005590810 systemd[1]: libpod-conmon-8a1f5982a560409a08e4df455fb625f0636283e9bbf5f67829e9c9cf33886349.scope: Deactivated successfully.
Jan 21 11:02:57 np0005590810 ceph-mgr[74671]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 11:02:57 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rook'
Jan 21 11:02:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:57.835+0000 7f1e4309e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'selftest'
Jan 21 11:02:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:58.400+0000 7f1e4309e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'snap_schedule'
Jan 21 11:02:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:58.471+0000 7f1e4309e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'stats'
Jan 21 11:02:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:58.559+0000 7f1e4309e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'status'
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'telegraf'
Jan 21 11:02:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:58.710+0000 7f1e4309e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'telemetry'
Jan 21 11:02:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:58.798+0000 7f1e4309e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 11:02:58 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 11:02:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:58.966+0000 7f1e4309e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 11:02:59 np0005590810 ceph-mgr[74671]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 11:02:59 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'volumes'
Jan 21 11:02:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:59.191+0000 7f1e4309e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 11:02:59 np0005590810 ceph-mgr[74671]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 11:02:59 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'zabbix'
Jan 21 11:02:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:59.489+0000 7f1e4309e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 11:02:59 np0005590810 ceph-mgr[74671]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 11:02:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:02:59.574+0000 7f1e4309e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 11:02:59 np0005590810 ceph-mgr[74671]: ms_deliver_dispatch: unhandled message 0x5609d32b89c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 11:02:59 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ygffhs
Jan 21 11:02:59 np0005590810 podman[74757]: 2026-01-21 16:02:59.744359809 +0000 UTC m=+0.022180649 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map Activating!
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map I am now activating
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.ygffhs(active, starting, since 0.88584s)
Jan 21 11:03:00 np0005590810 podman[74757]: 2026-01-21 16:03:00.483843944 +0000 UTC m=+0.761664804 container create 87167b27268402ecd675a5bcf295e2d723ab3a6d19c82d23422972ad97455308 (image=quay.io/ceph/ceph:v19, name=distracted_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e1 all = 1
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ygffhs", "id": "compute-0.ygffhs"} v 0)
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ygffhs", "id": "compute-0.ygffhs"}]: dispatch
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: Activating manager daemon compute-0.ygffhs
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Manager daemon compute-0.ygffhs is now available
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: balancer
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: crash
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [balancer INFO root] Starting
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: devicehealth
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] Starting
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: iostat
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: nfs
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: orchestrator
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: pg_autoscaler
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:03:00
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [balancer INFO root] No pools available
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: progress
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [progress INFO root] Loading...
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [progress INFO root] No stored events to load
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [progress INFO root] Loaded [] historic events
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [progress INFO root] Loaded OSDMap, ready.
Jan 21 11:03:00 np0005590810 systemd[1]: Started libpod-conmon-87167b27268402ecd675a5bcf295e2d723ab3a6d19c82d23422972ad97455308.scope.
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] recovery thread starting
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] starting setup
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: rbd_support
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: restful
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"} v 0)
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"}]: dispatch
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: status
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [restful INFO root] server_addr: :: server_port: 8003
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [restful WARNING root] server not running: no certificate configured
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: telemetry
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] PerfHandler: starting
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TaskHandler: starting
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"} v 0)
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"}]: dispatch
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] setup complete
Jan 21 11:03:00 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:00 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: volumes
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 21 11:03:00 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22cb4314819d85f2c5bbf6a10733e0b0397d34f650959dfb85bb183cc92f906/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:00 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22cb4314819d85f2c5bbf6a10733e0b0397d34f650959dfb85bb183cc92f906/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:00 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22cb4314819d85f2c5bbf6a10733e0b0397d34f650959dfb85bb183cc92f906/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:00 np0005590810 podman[74757]: 2026-01-21 16:03:00.578772179 +0000 UTC m=+0.856593019 container init 87167b27268402ecd675a5bcf295e2d723ab3a6d19c82d23422972ad97455308 (image=quay.io/ceph/ceph:v19, name=distracted_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 21 11:03:00 np0005590810 podman[74757]: 2026-01-21 16:03:00.586908131 +0000 UTC m=+0.864728961 container start 87167b27268402ecd675a5bcf295e2d723ab3a6d19c82d23422972ad97455308 (image=quay.io/ceph/ceph:v19, name=distracted_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:00 np0005590810 podman[74757]: 2026-01-21 16:03:00.591440411 +0000 UTC m=+0.869261241 container attach 87167b27268402ecd675a5bcf295e2d723ab3a6d19c82d23422972ad97455308 (image=quay.io/ceph/ceph:v19, name=distracted_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 21 11:03:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646719056' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]: 
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]: {
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "health": {
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "status": "HEALTH_OK",
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "checks": {},
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "mutes": []
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    },
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "election_epoch": 5,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "quorum": [
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        0
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    ],
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "quorum_names": [
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "compute-0"
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    ],
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "quorum_age": 8,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "monmap": {
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "epoch": 1,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "min_mon_release_name": "squid",
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "num_mons": 1
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    },
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "osdmap": {
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "epoch": 1,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "num_osds": 0,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "num_up_osds": 0,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "osd_up_since": 0,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "num_in_osds": 0,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "osd_in_since": 0,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "num_remapped_pgs": 0
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    },
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "pgmap": {
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "pgs_by_state": [],
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "num_pgs": 0,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "num_pools": 0,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "num_objects": 0,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "data_bytes": 0,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "bytes_used": 0,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "bytes_avail": 0,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "bytes_total": 0
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    },
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "fsmap": {
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "epoch": 1,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "btime": "2026-01-21T16:02:49:724869+0000",
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "by_rank": [],
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "up:standby": 0
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    },
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "mgrmap": {
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "available": false,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "num_standbys": 0,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "modules": [
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:            "iostat",
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:            "nfs",
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:            "restful"
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        ],
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "services": {}
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    },
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "servicemap": {
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "epoch": 1,
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "modified": "2026-01-21T16:02:49.728816+0000",
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:        "services": {}
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    },
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]:    "progress_events": {}
Jan 21 11:03:00 np0005590810 distracted_archimedes[74817]: }
Jan 21 11:03:00 np0005590810 systemd[1]: libpod-87167b27268402ecd675a5bcf295e2d723ab3a6d19c82d23422972ad97455308.scope: Deactivated successfully.
Jan 21 11:03:00 np0005590810 podman[74757]: 2026-01-21 16:03:00.796759322 +0000 UTC m=+1.074580152 container died 87167b27268402ecd675a5bcf295e2d723ab3a6d19c82d23422972ad97455308 (image=quay.io/ceph/ceph:v19, name=distracted_archimedes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 21 11:03:00 np0005590810 systemd[1]: var-lib-containers-storage-overlay-f22cb4314819d85f2c5bbf6a10733e0b0397d34f650959dfb85bb183cc92f906-merged.mount: Deactivated successfully.
Jan 21 11:03:00 np0005590810 podman[74757]: 2026-01-21 16:03:00.854307977 +0000 UTC m=+1.132128807 container remove 87167b27268402ecd675a5bcf295e2d723ab3a6d19c82d23422972ad97455308 (image=quay.io/ceph/ceph:v19, name=distracted_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 21 11:03:00 np0005590810 systemd[1]: libpod-conmon-87167b27268402ecd675a5bcf295e2d723ab3a6d19c82d23422972ad97455308.scope: Deactivated successfully.
Jan 21 11:03:01 np0005590810 ceph-mon[74380]: Manager daemon compute-0.ygffhs is now available
Jan 21 11:03:01 np0005590810 ceph-mon[74380]: from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"}]: dispatch
Jan 21 11:03:01 np0005590810 ceph-mon[74380]: from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"}]: dispatch
Jan 21 11:03:01 np0005590810 ceph-mon[74380]: from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:01 np0005590810 ceph-mon[74380]: from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:01 np0005590810 ceph-mon[74380]: from='mgr.14102 192.168.122.100:0/2622420292' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:01 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.ygffhs(active, since 1.93742s)
Jan 21 11:03:02 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:02 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.ygffhs(active, since 2s)
Jan 21 11:03:02 np0005590810 podman[74892]: 2026-01-21 16:03:02.896969873 +0000 UTC m=+0.021030334 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:03 np0005590810 podman[74892]: 2026-01-21 16:03:03.111372165 +0000 UTC m=+0.235432606 container create 3fbe5e59255583fda5e2f473bc941d0d1d0616326cda4d78a4cdd48e1c11f457 (image=quay.io/ceph/ceph:v19, name=practical_diffie, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:03 np0005590810 systemd[1]: Started libpod-conmon-3fbe5e59255583fda5e2f473bc941d0d1d0616326cda4d78a4cdd48e1c11f457.scope.
Jan 21 11:03:03 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:03 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7403d24f29cd0eb4a5797d27015db8ec1caef4fa2966ddbd32ac54f058fc1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:03 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7403d24f29cd0eb4a5797d27015db8ec1caef4fa2966ddbd32ac54f058fc1a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:03 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7403d24f29cd0eb4a5797d27015db8ec1caef4fa2966ddbd32ac54f058fc1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:03 np0005590810 podman[74892]: 2026-01-21 16:03:03.396264833 +0000 UTC m=+0.520325294 container init 3fbe5e59255583fda5e2f473bc941d0d1d0616326cda4d78a4cdd48e1c11f457 (image=quay.io/ceph/ceph:v19, name=practical_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 21 11:03:03 np0005590810 podman[74892]: 2026-01-21 16:03:03.402126285 +0000 UTC m=+0.526186726 container start 3fbe5e59255583fda5e2f473bc941d0d1d0616326cda4d78a4cdd48e1c11f457 (image=quay.io/ceph/ceph:v19, name=practical_diffie, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:03 np0005590810 podman[74892]: 2026-01-21 16:03:03.405537011 +0000 UTC m=+0.529597472 container attach 3fbe5e59255583fda5e2f473bc941d0d1d0616326cda4d78a4cdd48e1c11f457 (image=quay.io/ceph/ceph:v19, name=practical_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:03:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 21 11:03:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427992070' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 11:03:03 np0005590810 practical_diffie[74908]: 
Jan 21 11:03:03 np0005590810 practical_diffie[74908]: {
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "health": {
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "status": "HEALTH_OK",
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "checks": {},
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "mutes": []
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    },
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "election_epoch": 5,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "quorum": [
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        0
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    ],
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "quorum_names": [
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "compute-0"
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    ],
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "quorum_age": 12,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "monmap": {
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "epoch": 1,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "min_mon_release_name": "squid",
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "num_mons": 1
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    },
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "osdmap": {
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "epoch": 1,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "num_osds": 0,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "num_up_osds": 0,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "osd_up_since": 0,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "num_in_osds": 0,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "osd_in_since": 0,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "num_remapped_pgs": 0
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    },
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "pgmap": {
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "pgs_by_state": [],
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "num_pgs": 0,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "num_pools": 0,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "num_objects": 0,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "data_bytes": 0,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "bytes_used": 0,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "bytes_avail": 0,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "bytes_total": 0
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    },
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "fsmap": {
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "epoch": 1,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "btime": "2026-01-21T16:02:49:724869+0000",
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "by_rank": [],
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "up:standby": 0
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    },
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "mgrmap": {
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "available": true,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "num_standbys": 0,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "modules": [
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:            "iostat",
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:            "nfs",
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:            "restful"
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        ],
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "services": {}
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    },
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "servicemap": {
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "epoch": 1,
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "modified": "2026-01-21T16:02:49.728816+0000",
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:        "services": {}
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    },
Jan 21 11:03:03 np0005590810 practical_diffie[74908]:    "progress_events": {}
Jan 21 11:03:03 np0005590810 practical_diffie[74908]: }
Jan 21 11:03:03 np0005590810 systemd[1]: libpod-3fbe5e59255583fda5e2f473bc941d0d1d0616326cda4d78a4cdd48e1c11f457.scope: Deactivated successfully.
Jan 21 11:03:03 np0005590810 podman[74892]: 2026-01-21 16:03:03.850269259 +0000 UTC m=+0.974329710 container died 3fbe5e59255583fda5e2f473bc941d0d1d0616326cda4d78a4cdd48e1c11f457 (image=quay.io/ceph/ceph:v19, name=practical_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 11:03:04 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:05 np0005590810 systemd[1]: var-lib-containers-storage-overlay-3c7403d24f29cd0eb4a5797d27015db8ec1caef4fa2966ddbd32ac54f058fc1a-merged.mount: Deactivated successfully.
Jan 21 11:03:06 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:06 np0005590810 podman[74892]: 2026-01-21 16:03:06.684334199 +0000 UTC m=+3.808394640 container remove 3fbe5e59255583fda5e2f473bc941d0d1d0616326cda4d78a4cdd48e1c11f457 (image=quay.io/ceph/ceph:v19, name=practical_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 21 11:03:06 np0005590810 systemd[1]: libpod-conmon-3fbe5e59255583fda5e2f473bc941d0d1d0616326cda4d78a4cdd48e1c11f457.scope: Deactivated successfully.
Jan 21 11:03:06 np0005590810 podman[74947]: 2026-01-21 16:03:06.727870469 +0000 UTC m=+0.025330787 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:06 np0005590810 podman[74947]: 2026-01-21 16:03:06.876082757 +0000 UTC m=+0.173543095 container create a02ea970e5100bf5aee09d5006475381cd0380c880dfaaf7e45e4071ff66b20c (image=quay.io/ceph/ceph:v19, name=infallible_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 21 11:03:07 np0005590810 systemd[1]: Started libpod-conmon-a02ea970e5100bf5aee09d5006475381cd0380c880dfaaf7e45e4071ff66b20c.scope.
Jan 21 11:03:07 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:07 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aafb915542ce5da2cf727985b48858a8fcdcf26ee9455dd2f2bb7b6f3228246a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:07 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aafb915542ce5da2cf727985b48858a8fcdcf26ee9455dd2f2bb7b6f3228246a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:07 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aafb915542ce5da2cf727985b48858a8fcdcf26ee9455dd2f2bb7b6f3228246a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:07 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aafb915542ce5da2cf727985b48858a8fcdcf26ee9455dd2f2bb7b6f3228246a/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:07 np0005590810 podman[74947]: 2026-01-21 16:03:07.059450687 +0000 UTC m=+0.356911015 container init a02ea970e5100bf5aee09d5006475381cd0380c880dfaaf7e45e4071ff66b20c (image=quay.io/ceph/ceph:v19, name=infallible_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 21 11:03:07 np0005590810 podman[74947]: 2026-01-21 16:03:07.064474432 +0000 UTC m=+0.361934730 container start a02ea970e5100bf5aee09d5006475381cd0380c880dfaaf7e45e4071ff66b20c (image=quay.io/ceph/ceph:v19, name=infallible_swirles, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 21 11:03:07 np0005590810 podman[74947]: 2026-01-21 16:03:07.08308991 +0000 UTC m=+0.380550228 container attach a02ea970e5100bf5aee09d5006475381cd0380c880dfaaf7e45e4071ff66b20c (image=quay.io/ceph/ceph:v19, name=infallible_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 21 11:03:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2307736779' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 21 11:03:07 np0005590810 infallible_swirles[74964]: 
Jan 21 11:03:07 np0005590810 infallible_swirles[74964]: [global]
Jan 21 11:03:07 np0005590810 infallible_swirles[74964]: 	fsid = d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:03:07 np0005590810 infallible_swirles[74964]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 21 11:03:07 np0005590810 systemd[1]: libpod-a02ea970e5100bf5aee09d5006475381cd0380c880dfaaf7e45e4071ff66b20c.scope: Deactivated successfully.
Jan 21 11:03:07 np0005590810 podman[74990]: 2026-01-21 16:03:07.440372415 +0000 UTC m=+0.021663954 container died a02ea970e5100bf5aee09d5006475381cd0380c880dfaaf7e45e4071ff66b20c (image=quay.io/ceph/ceph:v19, name=infallible_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:07 np0005590810 systemd[1]: var-lib-containers-storage-overlay-aafb915542ce5da2cf727985b48858a8fcdcf26ee9455dd2f2bb7b6f3228246a-merged.mount: Deactivated successfully.
Jan 21 11:03:07 np0005590810 podman[74990]: 2026-01-21 16:03:07.484539105 +0000 UTC m=+0.065830614 container remove a02ea970e5100bf5aee09d5006475381cd0380c880dfaaf7e45e4071ff66b20c (image=quay.io/ceph/ceph:v19, name=infallible_swirles, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 11:03:07 np0005590810 systemd[1]: libpod-conmon-a02ea970e5100bf5aee09d5006475381cd0380c880dfaaf7e45e4071ff66b20c.scope: Deactivated successfully.
Jan 21 11:03:07 np0005590810 podman[75004]: 2026-01-21 16:03:07.544381802 +0000 UTC m=+0.035543014 container create 983dc3579a10d8036a05f43eb0b7128df30fec3c30db52eb56d628dc843b4d16 (image=quay.io/ceph/ceph:v19, name=confident_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:03:07 np0005590810 systemd[1]: Started libpod-conmon-983dc3579a10d8036a05f43eb0b7128df30fec3c30db52eb56d628dc843b4d16.scope.
Jan 21 11:03:07 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:07 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32df3f3e16068aa98e8ea360f4098346b03e5bcf02048c315945c1c3cb7a148/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:07 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32df3f3e16068aa98e8ea360f4098346b03e5bcf02048c315945c1c3cb7a148/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:07 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32df3f3e16068aa98e8ea360f4098346b03e5bcf02048c315945c1c3cb7a148/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:07 np0005590810 podman[75004]: 2026-01-21 16:03:07.529191031 +0000 UTC m=+0.020352273 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:07 np0005590810 podman[75004]: 2026-01-21 16:03:07.628511692 +0000 UTC m=+0.119672954 container init 983dc3579a10d8036a05f43eb0b7128df30fec3c30db52eb56d628dc843b4d16 (image=quay.io/ceph/ceph:v19, name=confident_lederberg, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 21 11:03:07 np0005590810 podman[75004]: 2026-01-21 16:03:07.632885318 +0000 UTC m=+0.124046530 container start 983dc3579a10d8036a05f43eb0b7128df30fec3c30db52eb56d628dc843b4d16 (image=quay.io/ceph/ceph:v19, name=confident_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 21 11:03:07 np0005590810 podman[75004]: 2026-01-21 16:03:07.636092248 +0000 UTC m=+0.127253510 container attach 983dc3579a10d8036a05f43eb0b7128df30fec3c30db52eb56d628dc843b4d16 (image=quay.io/ceph/ceph:v19, name=confident_lederberg, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 11:03:07 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2307736779' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 21 11:03:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 21 11:03:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/802093173' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 21 11:03:08 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:08 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/802093173' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 21 11:03:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/802093173' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 21 11:03:08 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.ygffhs(active, since 9s)
Jan 21 11:03:08 np0005590810 systemd[1]: libpod-983dc3579a10d8036a05f43eb0b7128df30fec3c30db52eb56d628dc843b4d16.scope: Deactivated successfully.
Jan 21 11:03:08 np0005590810 podman[75004]: 2026-01-21 16:03:08.946005088 +0000 UTC m=+1.437166320 container died 983dc3579a10d8036a05f43eb0b7128df30fec3c30db52eb56d628dc843b4d16 (image=quay.io/ceph/ceph:v19, name=confident_lederberg, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:03:08 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c32df3f3e16068aa98e8ea360f4098346b03e5bcf02048c315945c1c3cb7a148-merged.mount: Deactivated successfully.
Jan 21 11:03:08 np0005590810 podman[75004]: 2026-01-21 16:03:08.988544388 +0000 UTC m=+1.479705600 container remove 983dc3579a10d8036a05f43eb0b7128df30fec3c30db52eb56d628dc843b4d16 (image=quay.io/ceph/ceph:v19, name=confident_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:03:09 np0005590810 systemd[1]: libpod-conmon-983dc3579a10d8036a05f43eb0b7128df30fec3c30db52eb56d628dc843b4d16.scope: Deactivated successfully.
Jan 21 11:03:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ignoring --setuser ceph since I am not root
Jan 21 11:03:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ignoring --setgroup ceph since I am not root
Jan 21 11:03:09 np0005590810 podman[75055]: 2026-01-21 16:03:09.056488685 +0000 UTC m=+0.046425461 container create aa89ecf5eecc23ed54a0d87e6596025fe6481c5e74dad88e89de4b63abafa9ef (image=quay.io/ceph/ceph:v19, name=busy_germain, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:09 np0005590810 ceph-mgr[74671]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 21 11:03:09 np0005590810 ceph-mgr[74671]: pidfile_write: ignore empty --pid-file
Jan 21 11:03:09 np0005590810 systemd[1]: Started libpod-conmon-aa89ecf5eecc23ed54a0d87e6596025fe6481c5e74dad88e89de4b63abafa9ef.scope.
Jan 21 11:03:09 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'alerts'
Jan 21 11:03:09 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:09 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2302a5d45fa0e79cabc49555200b3371cb071227cbbd72cf827f7e39147aff2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:09 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2302a5d45fa0e79cabc49555200b3371cb071227cbbd72cf827f7e39147aff2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:09 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2302a5d45fa0e79cabc49555200b3371cb071227cbbd72cf827f7e39147aff2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:09 np0005590810 podman[75055]: 2026-01-21 16:03:09.032918844 +0000 UTC m=+0.022855640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:09.184+0000 7fdb69253140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 11:03:09 np0005590810 ceph-mgr[74671]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 11:03:09 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'balancer'
Jan 21 11:03:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:09.261+0000 7fdb69253140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 11:03:09 np0005590810 ceph-mgr[74671]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 11:03:09 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'cephadm'
Jan 21 11:03:09 np0005590810 podman[75055]: 2026-01-21 16:03:09.327327669 +0000 UTC m=+0.317264465 container init aa89ecf5eecc23ed54a0d87e6596025fe6481c5e74dad88e89de4b63abafa9ef (image=quay.io/ceph/ceph:v19, name=busy_germain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 21 11:03:09 np0005590810 podman[75055]: 2026-01-21 16:03:09.333079697 +0000 UTC m=+0.323016473 container start aa89ecf5eecc23ed54a0d87e6596025fe6481c5e74dad88e89de4b63abafa9ef (image=quay.io/ceph/ceph:v19, name=busy_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 21 11:03:09 np0005590810 podman[75055]: 2026-01-21 16:03:09.345472661 +0000 UTC m=+0.335409437 container attach aa89ecf5eecc23ed54a0d87e6596025fe6481c5e74dad88e89de4b63abafa9ef (image=quay.io/ceph/ceph:v19, name=busy_germain, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 21 11:03:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2590548664' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 21 11:03:09 np0005590810 busy_germain[75094]: {
Jan 21 11:03:09 np0005590810 busy_germain[75094]:    "epoch": 5,
Jan 21 11:03:09 np0005590810 busy_germain[75094]:    "available": true,
Jan 21 11:03:09 np0005590810 busy_germain[75094]:    "active_name": "compute-0.ygffhs",
Jan 21 11:03:09 np0005590810 busy_germain[75094]:    "num_standby": 0
Jan 21 11:03:09 np0005590810 busy_germain[75094]: }
Jan 21 11:03:09 np0005590810 systemd[1]: libpod-aa89ecf5eecc23ed54a0d87e6596025fe6481c5e74dad88e89de4b63abafa9ef.scope: Deactivated successfully.
Jan 21 11:03:09 np0005590810 podman[75055]: 2026-01-21 16:03:09.750412815 +0000 UTC m=+0.740349591 container died aa89ecf5eecc23ed54a0d87e6596025fe6481c5e74dad88e89de4b63abafa9ef (image=quay.io/ceph/ceph:v19, name=busy_germain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:09 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'crash'
Jan 21 11:03:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:10.065+0000 7fdb69253140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 11:03:10 np0005590810 ceph-mgr[74671]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 11:03:10 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'dashboard'
Jan 21 11:03:10 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/802093173' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 21 11:03:10 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'devicehealth'
Jan 21 11:03:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:10.714+0000 7fdb69253140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 11:03:10 np0005590810 ceph-mgr[74671]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 11:03:10 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 11:03:10 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a2302a5d45fa0e79cabc49555200b3371cb071227cbbd72cf827f7e39147aff2-merged.mount: Deactivated successfully.
Jan 21 11:03:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 11:03:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 11:03:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  from numpy import show_config as show_numpy_config
Jan 21 11:03:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:10.889+0000 7fdb69253140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 11:03:10 np0005590810 ceph-mgr[74671]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 11:03:10 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'influx'
Jan 21 11:03:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:10.971+0000 7fdb69253140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 11:03:10 np0005590810 ceph-mgr[74671]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 11:03:10 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'insights'
Jan 21 11:03:11 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'iostat'
Jan 21 11:03:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:11.111+0000 7fdb69253140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 11:03:11 np0005590810 ceph-mgr[74671]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 11:03:11 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'k8sevents'
Jan 21 11:03:11 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'localpool'
Jan 21 11:03:11 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 11:03:11 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'mirroring'
Jan 21 11:03:11 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'nfs'
Jan 21 11:03:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:12.151+0000 7fdb69253140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 11:03:12 np0005590810 ceph-mgr[74671]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 11:03:12 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'orchestrator'
Jan 21 11:03:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:12.370+0000 7fdb69253140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 11:03:12 np0005590810 ceph-mgr[74671]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 11:03:12 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 11:03:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:12.449+0000 7fdb69253140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 11:03:12 np0005590810 ceph-mgr[74671]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 11:03:12 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'osd_support'
Jan 21 11:03:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:12.517+0000 7fdb69253140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 11:03:12 np0005590810 ceph-mgr[74671]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 11:03:12 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 11:03:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:12.599+0000 7fdb69253140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 11:03:12 np0005590810 ceph-mgr[74671]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 11:03:12 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'progress'
Jan 21 11:03:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:12.676+0000 7fdb69253140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 11:03:12 np0005590810 ceph-mgr[74671]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 11:03:12 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'prometheus'
Jan 21 11:03:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:13.056+0000 7fdb69253140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 11:03:13 np0005590810 ceph-mgr[74671]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 11:03:13 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rbd_support'
Jan 21 11:03:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:13.154+0000 7fdb69253140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 11:03:13 np0005590810 ceph-mgr[74671]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 11:03:13 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'restful'
Jan 21 11:03:13 np0005590810 podman[75055]: 2026-01-21 16:03:13.213053906 +0000 UTC m=+4.202990722 container remove aa89ecf5eecc23ed54a0d87e6596025fe6481c5e74dad88e89de4b63abafa9ef (image=quay.io/ceph/ceph:v19, name=busy_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:13 np0005590810 systemd[1]: libpod-conmon-aa89ecf5eecc23ed54a0d87e6596025fe6481c5e74dad88e89de4b63abafa9ef.scope: Deactivated successfully.
Jan 21 11:03:13 np0005590810 podman[75145]: 2026-01-21 16:03:13.274830713 +0000 UTC m=+0.039935780 container create c397f60fd814648c116c5df1679c3385ca14f924b9ca1851fe1d3e9328e32a1c (image=quay.io/ceph/ceph:v19, name=naughty_lichterman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 21 11:03:13 np0005590810 systemd[1]: Started libpod-conmon-c397f60fd814648c116c5df1679c3385ca14f924b9ca1851fe1d3e9328e32a1c.scope.
Jan 21 11:03:13 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:13 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bcb7b95e92eeee039c95562a85f2d86aedfbc284c950df34341635654e14f2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:13 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bcb7b95e92eeee039c95562a85f2d86aedfbc284c950df34341635654e14f2e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:13 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bcb7b95e92eeee039c95562a85f2d86aedfbc284c950df34341635654e14f2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:13 np0005590810 podman[75145]: 2026-01-21 16:03:13.34080792 +0000 UTC m=+0.105913017 container init c397f60fd814648c116c5df1679c3385ca14f924b9ca1851fe1d3e9328e32a1c (image=quay.io/ceph/ceph:v19, name=naughty_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:13 np0005590810 podman[75145]: 2026-01-21 16:03:13.347360993 +0000 UTC m=+0.112466060 container start c397f60fd814648c116c5df1679c3385ca14f924b9ca1851fe1d3e9328e32a1c (image=quay.io/ceph/ceph:v19, name=naughty_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 11:03:13 np0005590810 podman[75145]: 2026-01-21 16:03:13.353109472 +0000 UTC m=+0.118214559 container attach c397f60fd814648c116c5df1679c3385ca14f924b9ca1851fe1d3e9328e32a1c (image=quay.io/ceph/ceph:v19, name=naughty_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:13 np0005590810 podman[75145]: 2026-01-21 16:03:13.257990481 +0000 UTC m=+0.023095568 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:13 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rgw'
Jan 21 11:03:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:13.621+0000 7fdb69253140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 11:03:13 np0005590810 ceph-mgr[74671]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 11:03:13 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rook'
Jan 21 11:03:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:14.210+0000 7fdb69253140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'selftest'
Jan 21 11:03:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:14.284+0000 7fdb69253140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'snap_schedule'
Jan 21 11:03:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:14.372+0000 7fdb69253140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'stats'
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'status'
Jan 21 11:03:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:14.520+0000 7fdb69253140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'telegraf'
Jan 21 11:03:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:14.592+0000 7fdb69253140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'telemetry'
Jan 21 11:03:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:14.763+0000 7fdb69253140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 11:03:14 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 11:03:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:15.002+0000 7fdb69253140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'volumes'
Jan 21 11:03:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:15.276+0000 7fdb69253140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'zabbix'
Jan 21 11:03:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:03:15.349+0000 7fdb69253140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ygffhs restarted
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ygffhs
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: ms_deliver_dispatch: unhandled message 0x55db55f18d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map Activating!
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map I am now activating
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.ygffhs(active, starting, since 0.44357s)
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ygffhs", "id": "compute-0.ygffhs"} v 0)
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ygffhs", "id": "compute-0.ygffhs"}]: dispatch
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e1 all = 1
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: balancer
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] Starting
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Manager daemon compute-0.ygffhs is now available
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:03:15
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] No pools available
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 21 11:03:15 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 21 11:03:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: Active manager daemon compute-0.ygffhs restarted
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: Activating manager daemon compute-0.ygffhs
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: cephadm
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: crash
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: devicehealth
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] Starting
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: iostat
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: nfs
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: orchestrator
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: pg_autoscaler
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: progress
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [progress INFO root] Loading...
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [progress INFO root] No stored events to load
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [progress INFO root] Loaded [] historic events
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [progress INFO root] Loaded OSDMap, ready.
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] recovery thread starting
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] starting setup
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: rbd_support
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: restful
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: status
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [restful INFO root] server_addr: :: server_port: 8003
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: telemetry
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"} v 0)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"}]: dispatch
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [restful WARNING root] server not running: no certificate configured
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] PerfHandler: starting
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TaskHandler: starting
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"} v 0)
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"}]: dispatch
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] setup complete
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: volumes
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 21 11:03:16 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.ygffhs(active, since 1.2201s)
Jan 21 11:03:16 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 21 11:03:16 np0005590810 naughty_lichterman[75161]: {
Jan 21 11:03:16 np0005590810 naughty_lichterman[75161]:    "mgrmap_epoch": 7,
Jan 21 11:03:16 np0005590810 naughty_lichterman[75161]:    "initialized": true
Jan 21 11:03:16 np0005590810 naughty_lichterman[75161]: }
Jan 21 11:03:16 np0005590810 systemd[1]: libpod-c397f60fd814648c116c5df1679c3385ca14f924b9ca1851fe1d3e9328e32a1c.scope: Deactivated successfully.
Jan 21 11:03:16 np0005590810 podman[75145]: 2026-01-21 16:03:16.595934953 +0000 UTC m=+3.361040020 container died c397f60fd814648c116c5df1679c3385ca14f924b9ca1851fe1d3e9328e32a1c (image=quay.io/ceph/ceph:v19, name=naughty_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:16 np0005590810 systemd[1]: var-lib-containers-storage-overlay-5bcb7b95e92eeee039c95562a85f2d86aedfbc284c950df34341635654e14f2e-merged.mount: Deactivated successfully.
Jan 21 11:03:16 np0005590810 podman[75145]: 2026-01-21 16:03:16.806142135 +0000 UTC m=+3.571247202 container remove c397f60fd814648c116c5df1679c3385ca14f924b9ca1851fe1d3e9328e32a1c (image=quay.io/ceph/ceph:v19, name=naughty_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:03:16 np0005590810 systemd[1]: libpod-conmon-c397f60fd814648c116c5df1679c3385ca14f924b9ca1851fe1d3e9328e32a1c.scope: Deactivated successfully.
Jan 21 11:03:16 np0005590810 podman[75310]: 2026-01-21 16:03:16.865636561 +0000 UTC m=+0.038090792 container create eca824c5d657dabc15373ebc729e380edf72babd99c454e8c66f3cad94794ffc (image=quay.io/ceph/ceph:v19, name=hopeful_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:16 np0005590810 systemd[1]: Started libpod-conmon-eca824c5d657dabc15373ebc729e380edf72babd99c454e8c66f3cad94794ffc.scope.
Jan 21 11:03:16 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2455a0289fe06b6ac769fbbe03a0ced38e0a5ac6768b1fcd5fb3d48677b879ec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2455a0289fe06b6ac769fbbe03a0ced38e0a5ac6768b1fcd5fb3d48677b879ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2455a0289fe06b6ac769fbbe03a0ced38e0a5ac6768b1fcd5fb3d48677b879ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:16 np0005590810 podman[75310]: 2026-01-21 16:03:16.85110217 +0000 UTC m=+0.023556441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Jan 21 11:03:17 np0005590810 podman[75310]: 2026-01-21 16:03:17.150937363 +0000 UTC m=+0.323391614 container init eca824c5d657dabc15373ebc729e380edf72babd99c454e8c66f3cad94794ffc (image=quay.io/ceph/ceph:v19, name=hopeful_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 11:03:17 np0005590810 podman[75310]: 2026-01-21 16:03:17.15602071 +0000 UTC m=+0.328474941 container start eca824c5d657dabc15373ebc729e380edf72babd99c454e8c66f3cad94794ffc (image=quay.io/ceph/ceph:v19, name=hopeful_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Jan 21 11:03:17 np0005590810 podman[75310]: 2026-01-21 16:03:17.16567585 +0000 UTC m=+0.338130111 container attach eca824c5d657dabc15373ebc729e380edf72babd99c454e8c66f3cad94794ffc (image=quay.io/ceph/ceph:v19, name=hopeful_murdock, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: Manager daemon compute-0.ygffhs is now available
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: Found migration_current of "None". Setting to last migration.
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"}]: dispatch
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"}]: dispatch
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:17 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 11:03:17 np0005590810 systemd[1]: libpod-eca824c5d657dabc15373ebc729e380edf72babd99c454e8c66f3cad94794ffc.scope: Deactivated successfully.
Jan 21 11:03:17 np0005590810 podman[75310]: 2026-01-21 16:03:17.535313448 +0000 UTC m=+0.707767679 container died eca824c5d657dabc15373ebc729e380edf72babd99c454e8c66f3cad94794ffc (image=quay.io/ceph/ceph:v19, name=hopeful_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019923328 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:03:17 np0005590810 systemd[1]: var-lib-containers-storage-overlay-2455a0289fe06b6ac769fbbe03a0ced38e0a5ac6768b1fcd5fb3d48677b879ec-merged.mount: Deactivated successfully.
Jan 21 11:03:17 np0005590810 podman[75310]: 2026-01-21 16:03:17.893727889 +0000 UTC m=+1.066182120 container remove eca824c5d657dabc15373ebc729e380edf72babd99c454e8c66f3cad94794ffc (image=quay.io/ceph/ceph:v19, name=hopeful_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 11:03:17 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:18 np0005590810 podman[75364]: 2026-01-21 16:03:17.929781867 +0000 UTC m=+0.018497165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:18 np0005590810 podman[75364]: 2026-01-21 16:03:18.029583573 +0000 UTC m=+0.118298851 container create e563eef94ba9c01dfffcc08a572fd3f335f2b651c4ab291094f68f38cdb02e22 (image=quay.io/ceph/ceph:v19, name=stoic_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:03:18] ENGINE Bus STARTING
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:03:18] ENGINE Bus STARTING
Jan 21 11:03:18 np0005590810 systemd[1]: Started libpod-conmon-e563eef94ba9c01dfffcc08a572fd3f335f2b651c4ab291094f68f38cdb02e22.scope.
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:03:18] ENGINE Serving on http://192.168.122.100:8765
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:03:18] ENGINE Serving on http://192.168.122.100:8765
Jan 21 11:03:18 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83d629a95cfe6da46c4fa2ad19279f24e2d3612d93655d101fc9521329dc3e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83d629a95cfe6da46c4fa2ad19279f24e2d3612d93655d101fc9521329dc3e6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83d629a95cfe6da46c4fa2ad19279f24e2d3612d93655d101fc9521329dc3e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:18 np0005590810 podman[75364]: 2026-01-21 16:03:18.169539215 +0000 UTC m=+0.258254513 container init e563eef94ba9c01dfffcc08a572fd3f335f2b651c4ab291094f68f38cdb02e22 (image=quay.io/ceph/ceph:v19, name=stoic_varahamihira, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:18 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:18 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:18 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:18 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:03:18] ENGINE Bus STARTING
Jan 21 11:03:18 np0005590810 podman[75364]: 2026-01-21 16:03:18.175030606 +0000 UTC m=+0.263745884 container start e563eef94ba9c01dfffcc08a572fd3f335f2b651c4ab291094f68f38cdb02e22 (image=quay.io/ceph/ceph:v19, name=stoic_varahamihira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 11:03:18 np0005590810 podman[75364]: 2026-01-21 16:03:18.178217575 +0000 UTC m=+0.266933203 container attach e563eef94ba9c01dfffcc08a572fd3f335f2b651c4ab291094f68f38cdb02e22 (image=quay.io/ceph/ceph:v19, name=stoic_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 21 11:03:18 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.ygffhs(active, since 2s)
Jan 21 11:03:18 np0005590810 systemd[1]: libpod-conmon-eca824c5d657dabc15373ebc729e380edf72babd99c454e8c66f3cad94794ffc.scope: Deactivated successfully.
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:03:18] ENGINE Serving on https://192.168.122.100:7150
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:03:18] ENGINE Serving on https://192.168.122.100:7150
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:03:18] ENGINE Bus STARTED
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:03:18] ENGINE Bus STARTED
Jan 21 11:03:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 11:03:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:03:18] ENGINE Client ('192.168.122.100', 52094) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:03:18] ENGINE Client ('192.168.122.100', 52094) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:03:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 21 11:03:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Set ssh ssh_user
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 21 11:03:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 21 11:03:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Set ssh ssh_config
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 21 11:03:18 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 21 11:03:18 np0005590810 stoic_varahamihira[75391]: ssh user set to ceph-admin. sudo will be used
Jan 21 11:03:18 np0005590810 systemd[1]: libpod-e563eef94ba9c01dfffcc08a572fd3f335f2b651c4ab291094f68f38cdb02e22.scope: Deactivated successfully.
Jan 21 11:03:18 np0005590810 podman[75364]: 2026-01-21 16:03:18.856382576 +0000 UTC m=+0.945097944 container died e563eef94ba9c01dfffcc08a572fd3f335f2b651c4ab291094f68f38cdb02e22 (image=quay.io/ceph/ceph:v19, name=stoic_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 11:03:19 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c83d629a95cfe6da46c4fa2ad19279f24e2d3612d93655d101fc9521329dc3e6-merged.mount: Deactivated successfully.
Jan 21 11:03:19 np0005590810 podman[75364]: 2026-01-21 16:03:19.265694955 +0000 UTC m=+1.354410233 container remove e563eef94ba9c01dfffcc08a572fd3f335f2b651c4ab291094f68f38cdb02e22 (image=quay.io/ceph/ceph:v19, name=stoic_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 21 11:03:19 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:03:18] ENGINE Serving on http://192.168.122.100:8765
Jan 21 11:03:19 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:03:18] ENGINE Serving on https://192.168.122.100:7150
Jan 21 11:03:19 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:03:18] ENGINE Bus STARTED
Jan 21 11:03:19 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:03:18] ENGINE Client ('192.168.122.100', 52094) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 11:03:19 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:19 np0005590810 ceph-mon[74380]: Set ssh ssh_user
Jan 21 11:03:19 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:19 np0005590810 ceph-mon[74380]: Set ssh ssh_config
Jan 21 11:03:19 np0005590810 ceph-mon[74380]: ssh user set to ceph-admin. sudo will be used
Jan 21 11:03:19 np0005590810 podman[75442]: 2026-01-21 16:03:19.31774697 +0000 UTC m=+0.034690078 container create a2ffd61949d455a67a832adb4f7a4162f66902474e8c7af327f9aba6ec782b1d (image=quay.io/ceph/ceph:v19, name=stoic_sanderson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 11:03:19 np0005590810 systemd[1]: libpod-conmon-e563eef94ba9c01dfffcc08a572fd3f335f2b651c4ab291094f68f38cdb02e22.scope: Deactivated successfully.
Jan 21 11:03:19 np0005590810 systemd[1]: Started libpod-conmon-a2ffd61949d455a67a832adb4f7a4162f66902474e8c7af327f9aba6ec782b1d.scope.
Jan 21 11:03:19 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7755e0f07525d8759c8c8d36b7f3d9bdce25091e80acc344a4e1b5254a1bed/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7755e0f07525d8759c8c8d36b7f3d9bdce25091e80acc344a4e1b5254a1bed/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7755e0f07525d8759c8c8d36b7f3d9bdce25091e80acc344a4e1b5254a1bed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7755e0f07525d8759c8c8d36b7f3d9bdce25091e80acc344a4e1b5254a1bed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7755e0f07525d8759c8c8d36b7f3d9bdce25091e80acc344a4e1b5254a1bed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:19 np0005590810 podman[75442]: 2026-01-21 16:03:19.375881743 +0000 UTC m=+0.092824841 container init a2ffd61949d455a67a832adb4f7a4162f66902474e8c7af327f9aba6ec782b1d (image=quay.io/ceph/ceph:v19, name=stoic_sanderson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 11:03:19 np0005590810 podman[75442]: 2026-01-21 16:03:19.382999855 +0000 UTC m=+0.099942963 container start a2ffd61949d455a67a832adb4f7a4162f66902474e8c7af327f9aba6ec782b1d (image=quay.io/ceph/ceph:v19, name=stoic_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:03:19 np0005590810 podman[75442]: 2026-01-21 16:03:19.386308097 +0000 UTC m=+0.103251225 container attach a2ffd61949d455a67a832adb4f7a4162f66902474e8c7af327f9aba6ec782b1d (image=quay.io/ceph/ceph:v19, name=stoic_sanderson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 11:03:19 np0005590810 podman[75442]: 2026-01-21 16:03:19.301458755 +0000 UTC m=+0.018401883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:19 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:03:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 21 11:03:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:19 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 21 11:03:19 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 21 11:03:19 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Set ssh private key
Jan 21 11:03:19 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 21 11:03:19 np0005590810 systemd[1]: libpod-a2ffd61949d455a67a832adb4f7a4162f66902474e8c7af327f9aba6ec782b1d.scope: Deactivated successfully.
Jan 21 11:03:19 np0005590810 podman[75442]: 2026-01-21 16:03:19.761285831 +0000 UTC m=+0.478228969 container died a2ffd61949d455a67a832adb4f7a4162f66902474e8c7af327f9aba6ec782b1d (image=quay.io/ceph/ceph:v19, name=stoic_sanderson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:03:19 np0005590810 systemd[1]: var-lib-containers-storage-overlay-cb7755e0f07525d8759c8c8d36b7f3d9bdce25091e80acc344a4e1b5254a1bed-merged.mount: Deactivated successfully.
Jan 21 11:03:19 np0005590810 podman[75442]: 2026-01-21 16:03:19.797788964 +0000 UTC m=+0.514732072 container remove a2ffd61949d455a67a832adb4f7a4162f66902474e8c7af327f9aba6ec782b1d (image=quay.io/ceph/ceph:v19, name=stoic_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:19 np0005590810 systemd[1]: libpod-conmon-a2ffd61949d455a67a832adb4f7a4162f66902474e8c7af327f9aba6ec782b1d.scope: Deactivated successfully.
Jan 21 11:03:19 np0005590810 podman[75496]: 2026-01-21 16:03:19.855166243 +0000 UTC m=+0.039544428 container create c53f40eaf4790fec89dde29fc2d297017f79975a2815e60212d7b0409c54a2b5 (image=quay.io/ceph/ceph:v19, name=elastic_ishizaka, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:19 np0005590810 systemd[1]: Started libpod-conmon-c53f40eaf4790fec89dde29fc2d297017f79975a2815e60212d7b0409c54a2b5.scope.
Jan 21 11:03:19 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf24a6e6051849084e79453c7056c24360c259f4966b098fb840f503bf6d9b9/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf24a6e6051849084e79453c7056c24360c259f4966b098fb840f503bf6d9b9/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf24a6e6051849084e79453c7056c24360c259f4966b098fb840f503bf6d9b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf24a6e6051849084e79453c7056c24360c259f4966b098fb840f503bf6d9b9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf24a6e6051849084e79453c7056c24360c259f4966b098fb840f503bf6d9b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:19 np0005590810 podman[75496]: 2026-01-21 16:03:19.932781331 +0000 UTC m=+0.117159536 container init c53f40eaf4790fec89dde29fc2d297017f79975a2815e60212d7b0409c54a2b5 (image=quay.io/ceph/ceph:v19, name=elastic_ishizaka, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 11:03:19 np0005590810 podman[75496]: 2026-01-21 16:03:19.83765018 +0000 UTC m=+0.022028385 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:19 np0005590810 podman[75496]: 2026-01-21 16:03:19.94013377 +0000 UTC m=+0.124511955 container start c53f40eaf4790fec89dde29fc2d297017f79975a2815e60212d7b0409c54a2b5 (image=quay.io/ceph/ceph:v19, name=elastic_ishizaka, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 21 11:03:19 np0005590810 podman[75496]: 2026-01-21 16:03:19.943526026 +0000 UTC m=+0.127904241 container attach c53f40eaf4790fec89dde29fc2d297017f79975a2815e60212d7b0409c54a2b5 (image=quay.io/ceph/ceph:v19, name=elastic_ishizaka, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 11:03:19 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:20 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:03:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 21 11:03:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:20 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 21 11:03:20 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 21 11:03:20 np0005590810 systemd[1]: libpod-c53f40eaf4790fec89dde29fc2d297017f79975a2815e60212d7b0409c54a2b5.scope: Deactivated successfully.
Jan 21 11:03:20 np0005590810 podman[75496]: 2026-01-21 16:03:20.284085421 +0000 UTC m=+0.468463606 container died c53f40eaf4790fec89dde29fc2d297017f79975a2815e60212d7b0409c54a2b5 (image=quay.io/ceph/ceph:v19, name=elastic_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:03:20 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ddf24a6e6051849084e79453c7056c24360c259f4966b098fb840f503bf6d9b9-merged.mount: Deactivated successfully.
Jan 21 11:03:20 np0005590810 podman[75496]: 2026-01-21 16:03:20.317861529 +0000 UTC m=+0.502239704 container remove c53f40eaf4790fec89dde29fc2d297017f79975a2815e60212d7b0409c54a2b5 (image=quay.io/ceph/ceph:v19, name=elastic_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 11:03:20 np0005590810 systemd[1]: libpod-conmon-c53f40eaf4790fec89dde29fc2d297017f79975a2815e60212d7b0409c54a2b5.scope: Deactivated successfully.
Jan 21 11:03:20 np0005590810 podman[75551]: 2026-01-21 16:03:20.370012857 +0000 UTC m=+0.034676126 container create 9654a9c072e051b12718cfac1e827cf06e8a932af0440abe30f55f15074747aa (image=quay.io/ceph/ceph:v19, name=sleepy_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:03:20 np0005590810 systemd[1]: Started libpod-conmon-9654a9c072e051b12718cfac1e827cf06e8a932af0440abe30f55f15074747aa.scope.
Jan 21 11:03:20 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:20 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1d1834dd350f6d3936642ff839183a3713d5d3e8db2954b4c3944afd81927/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:20 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1d1834dd350f6d3936642ff839183a3713d5d3e8db2954b4c3944afd81927/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:20 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1d1834dd350f6d3936642ff839183a3713d5d3e8db2954b4c3944afd81927/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:20 np0005590810 podman[75551]: 2026-01-21 16:03:20.421547776 +0000 UTC m=+0.086211075 container init 9654a9c072e051b12718cfac1e827cf06e8a932af0440abe30f55f15074747aa (image=quay.io/ceph/ceph:v19, name=sleepy_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 21 11:03:20 np0005590810 podman[75551]: 2026-01-21 16:03:20.426376126 +0000 UTC m=+0.091039395 container start 9654a9c072e051b12718cfac1e827cf06e8a932af0440abe30f55f15074747aa (image=quay.io/ceph/ceph:v19, name=sleepy_payne, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:03:20 np0005590810 podman[75551]: 2026-01-21 16:03:20.42910069 +0000 UTC m=+0.093763989 container attach 9654a9c072e051b12718cfac1e827cf06e8a932af0440abe30f55f15074747aa (image=quay.io/ceph/ceph:v19, name=sleepy_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 21 11:03:20 np0005590810 podman[75551]: 2026-01-21 16:03:20.354505796 +0000 UTC m=+0.019169065 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:20 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:20 np0005590810 ceph-mon[74380]: Set ssh ssh_identity_key
Jan 21 11:03:20 np0005590810 ceph-mon[74380]: Set ssh private key
Jan 21 11:03:20 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:20 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:03:20 np0005590810 sleepy_payne[75568]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp9tTq9mjcwAxPY+bq6WAWEtt2Ay+YGgzMRC6Fcqs6mzcDuiDuQaAhuLhzn/a3AV/lU60oHo9Ue26KSOXMfB+MtWbJ4zW9AaS8HZlIwicK9UxiRo13/1lgTcmB0Tk01JoHZYuAQMSmRqgvAgFYqPUHbK4jfm31iFgWWp/xgUfTRXSC9PYf7TTKPyYGr2ri91YPWIWTfbsGAPy9sYH6igxqKAI78w3c9lN47x11z0QTokRHnnUdRycCWWD5PMXDm2/NA+CL9B2bc1GF6WqiLI6/C1ncnfmFi0Ae9gIotxpiokV11Fz89Rm2s92WIjP81+9Ny29/ETnW5adG424+qbvIFthFp37eTYhrfHbwefeMc7VILD01mu8Bl8GBvWUoii+soUL99DzkUwtsKF3IIo9GyfNvhLFuGiXAATvRL7WZ89wM7NGoWEthz4bTwKN0xcr3b3p07zvuJdwOBCvvANPbOnM8WNJMnyJMDmxucV2xnZbYyuD2N+t0M7kv0Mlxhzs= zuul@controller
Jan 21 11:03:20 np0005590810 systemd[1]: libpod-9654a9c072e051b12718cfac1e827cf06e8a932af0440abe30f55f15074747aa.scope: Deactivated successfully.
Jan 21 11:03:20 np0005590810 podman[75551]: 2026-01-21 16:03:20.762415142 +0000 UTC m=+0.427078411 container died 9654a9c072e051b12718cfac1e827cf06e8a932af0440abe30f55f15074747aa (image=quay.io/ceph/ceph:v19, name=sleepy_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:03:20 np0005590810 systemd[1]: var-lib-containers-storage-overlay-75a1d1834dd350f6d3936642ff839183a3713d5d3e8db2954b4c3944afd81927-merged.mount: Deactivated successfully.
Jan 21 11:03:20 np0005590810 podman[75551]: 2026-01-21 16:03:20.804674243 +0000 UTC m=+0.469337512 container remove 9654a9c072e051b12718cfac1e827cf06e8a932af0440abe30f55f15074747aa (image=quay.io/ceph/ceph:v19, name=sleepy_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:03:20 np0005590810 systemd[1]: libpod-conmon-9654a9c072e051b12718cfac1e827cf06e8a932af0440abe30f55f15074747aa.scope: Deactivated successfully.
Jan 21 11:03:20 np0005590810 podman[75606]: 2026-01-21 16:03:20.859518014 +0000 UTC m=+0.037842585 container create e3fd2d7fc80d5b476c33483f877f7c3f1b1ca579bd6aaadd67d863f3f8f79d35 (image=quay.io/ceph/ceph:v19, name=xenodochial_meitner, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:20 np0005590810 systemd[1]: Started libpod-conmon-e3fd2d7fc80d5b476c33483f877f7c3f1b1ca579bd6aaadd67d863f3f8f79d35.scope.
Jan 21 11:03:20 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:20 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3766f3562b85edfdf57ffb3849fca7f4dabceabdecf9f99d2207641dba5a234c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:20 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3766f3562b85edfdf57ffb3849fca7f4dabceabdecf9f99d2207641dba5a234c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:20 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3766f3562b85edfdf57ffb3849fca7f4dabceabdecf9f99d2207641dba5a234c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:20 np0005590810 podman[75606]: 2026-01-21 16:03:20.924836961 +0000 UTC m=+0.103161532 container init e3fd2d7fc80d5b476c33483f877f7c3f1b1ca579bd6aaadd67d863f3f8f79d35 (image=quay.io/ceph/ceph:v19, name=xenodochial_meitner, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:03:20 np0005590810 podman[75606]: 2026-01-21 16:03:20.929530936 +0000 UTC m=+0.107855507 container start e3fd2d7fc80d5b476c33483f877f7c3f1b1ca579bd6aaadd67d863f3f8f79d35 (image=quay.io/ceph/ceph:v19, name=xenodochial_meitner, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:03:20 np0005590810 podman[75606]: 2026-01-21 16:03:20.932304232 +0000 UTC m=+0.110628833 container attach e3fd2d7fc80d5b476c33483f877f7c3f1b1ca579bd6aaadd67d863f3f8f79d35 (image=quay.io/ceph/ceph:v19, name=xenodochial_meitner, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 21 11:03:20 np0005590810 podman[75606]: 2026-01-21 16:03:20.843007892 +0000 UTC m=+0.021332493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:21 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:03:21 np0005590810 systemd-logind[795]: New session 21 of user ceph-admin.
Jan 21 11:03:21 np0005590810 systemd[1]: Created slice User Slice of UID 42477.
Jan 21 11:03:21 np0005590810 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 21 11:03:21 np0005590810 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 21 11:03:21 np0005590810 systemd[1]: Starting User Manager for UID 42477...
Jan 21 11:03:21 np0005590810 systemd[75652]: Queued start job for default target Main User Target.
Jan 21 11:03:21 np0005590810 systemd-logind[795]: New session 23 of user ceph-admin.
Jan 21 11:03:21 np0005590810 systemd[75652]: Created slice User Application Slice.
Jan 21 11:03:21 np0005590810 systemd[75652]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 11:03:21 np0005590810 systemd[75652]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 11:03:21 np0005590810 systemd[75652]: Reached target Paths.
Jan 21 11:03:21 np0005590810 systemd[75652]: Reached target Timers.
Jan 21 11:03:21 np0005590810 systemd[75652]: Starting D-Bus User Message Bus Socket...
Jan 21 11:03:21 np0005590810 systemd[75652]: Starting Create User's Volatile Files and Directories...
Jan 21 11:03:21 np0005590810 systemd[75652]: Finished Create User's Volatile Files and Directories.
Jan 21 11:03:21 np0005590810 systemd[75652]: Listening on D-Bus User Message Bus Socket.
Jan 21 11:03:21 np0005590810 systemd[75652]: Reached target Sockets.
Jan 21 11:03:21 np0005590810 systemd[75652]: Reached target Basic System.
Jan 21 11:03:21 np0005590810 systemd[75652]: Reached target Main User Target.
Jan 21 11:03:21 np0005590810 systemd[75652]: Startup finished in 119ms.
Jan 21 11:03:21 np0005590810 systemd[1]: Started User Manager for UID 42477.
Jan 21 11:03:21 np0005590810 systemd[1]: Started Session 21 of User ceph-admin.
Jan 21 11:03:21 np0005590810 systemd[1]: Started Session 23 of User ceph-admin.
Jan 21 11:03:21 np0005590810 ceph-mon[74380]: Set ssh ssh_identity_pub
Jan 21 11:03:21 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:21 np0005590810 systemd-logind[795]: New session 24 of user ceph-admin.
Jan 21 11:03:21 np0005590810 systemd[1]: Started Session 24 of User ceph-admin.
Jan 21 11:03:22 np0005590810 systemd-logind[795]: New session 25 of user ceph-admin.
Jan 21 11:03:22 np0005590810 systemd[1]: Started Session 25 of User ceph-admin.
Jan 21 11:03:22 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 21 11:03:22 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 21 11:03:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053087 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:03:22 np0005590810 systemd-logind[795]: New session 26 of user ceph-admin.
Jan 21 11:03:22 np0005590810 systemd[1]: Started Session 26 of User ceph-admin.
Jan 21 11:03:22 np0005590810 systemd-logind[795]: New session 27 of user ceph-admin.
Jan 21 11:03:22 np0005590810 systemd[1]: Started Session 27 of User ceph-admin.
Jan 21 11:03:23 np0005590810 systemd-logind[795]: New session 28 of user ceph-admin.
Jan 21 11:03:23 np0005590810 systemd[1]: Started Session 28 of User ceph-admin.
Jan 21 11:03:23 np0005590810 systemd-logind[795]: New session 29 of user ceph-admin.
Jan 21 11:03:23 np0005590810 systemd[1]: Started Session 29 of User ceph-admin.
Jan 21 11:03:23 np0005590810 ceph-mon[74380]: Deploying cephadm binary to compute-0
Jan 21 11:03:23 np0005590810 systemd-logind[795]: New session 30 of user ceph-admin.
Jan 21 11:03:23 np0005590810 systemd[1]: Started Session 30 of User ceph-admin.
Jan 21 11:03:23 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:24 np0005590810 systemd-logind[795]: New session 31 of user ceph-admin.
Jan 21 11:03:24 np0005590810 systemd[1]: Started Session 31 of User ceph-admin.
Jan 21 11:03:25 np0005590810 systemd-logind[795]: New session 32 of user ceph-admin.
Jan 21 11:03:25 np0005590810 systemd[1]: Started Session 32 of User ceph-admin.
Jan 21 11:03:25 np0005590810 systemd-logind[795]: New session 33 of user ceph-admin.
Jan 21 11:03:25 np0005590810 systemd[1]: Started Session 33 of User ceph-admin.
Jan 21 11:03:25 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 11:03:25 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:25 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Added host compute-0
Jan 21 11:03:25 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 21 11:03:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 11:03:25 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 11:03:26 np0005590810 xenodochial_meitner[75622]: Added host 'compute-0' with addr '192.168.122.100'
Jan 21 11:03:26 np0005590810 systemd[1]: libpod-e3fd2d7fc80d5b476c33483f877f7c3f1b1ca579bd6aaadd67d863f3f8f79d35.scope: Deactivated successfully.
Jan 21 11:03:26 np0005590810 podman[76019]: 2026-01-21 16:03:26.076770464 +0000 UTC m=+0.037111223 container died e3fd2d7fc80d5b476c33483f877f7c3f1b1ca579bd6aaadd67d863f3f8f79d35 (image=quay.io/ceph/ceph:v19, name=xenodochial_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:03:26 np0005590810 systemd[1]: var-lib-containers-storage-overlay-3766f3562b85edfdf57ffb3849fca7f4dabceabdecf9f99d2207641dba5a234c-merged.mount: Deactivated successfully.
Jan 21 11:03:26 np0005590810 podman[76019]: 2026-01-21 16:03:26.114950188 +0000 UTC m=+0.075290927 container remove e3fd2d7fc80d5b476c33483f877f7c3f1b1ca579bd6aaadd67d863f3f8f79d35 (image=quay.io/ceph/ceph:v19, name=xenodochial_meitner, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:26 np0005590810 systemd[1]: libpod-conmon-e3fd2d7fc80d5b476c33483f877f7c3f1b1ca579bd6aaadd67d863f3f8f79d35.scope: Deactivated successfully.
Jan 21 11:03:26 np0005590810 podman[76071]: 2026-01-21 16:03:26.180186012 +0000 UTC m=+0.039980081 container create eb71f9548441b1187374c7c0e6d8cce8423b2f63a6851686c02a541c37c8e5ca (image=quay.io/ceph/ceph:v19, name=thirsty_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Jan 21 11:03:26 np0005590810 systemd[1]: Started libpod-conmon-eb71f9548441b1187374c7c0e6d8cce8423b2f63a6851686c02a541c37c8e5ca.scope.
Jan 21 11:03:26 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546d407dbd973f130422852938e4e3f9763959d240f6a47df28723fd2a8d70a3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546d407dbd973f130422852938e4e3f9763959d240f6a47df28723fd2a8d70a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546d407dbd973f130422852938e4e3f9763959d240f6a47df28723fd2a8d70a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:26 np0005590810 podman[76071]: 2026-01-21 16:03:26.162452392 +0000 UTC m=+0.022246491 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:26 np0005590810 podman[76071]: 2026-01-21 16:03:26.272820077 +0000 UTC m=+0.132614146 container init eb71f9548441b1187374c7c0e6d8cce8423b2f63a6851686c02a541c37c8e5ca (image=quay.io/ceph/ceph:v19, name=thirsty_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:03:26 np0005590810 podman[76071]: 2026-01-21 16:03:26.283337583 +0000 UTC m=+0.143131652 container start eb71f9548441b1187374c7c0e6d8cce8423b2f63a6851686c02a541c37c8e5ca (image=quay.io/ceph/ceph:v19, name=thirsty_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:26 np0005590810 podman[76071]: 2026-01-21 16:03:26.286719628 +0000 UTC m=+0.146513707 container attach eb71f9548441b1187374c7c0e6d8cce8423b2f63a6851686c02a541c37c8e5ca (image=quay.io/ceph/ceph:v19, name=thirsty_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 11:03:26 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:03:26 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 21 11:03:26 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 21 11:03:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 11:03:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:26 np0005590810 thirsty_shamir[76087]: Scheduled mon update...
Jan 21 11:03:26 np0005590810 systemd[1]: libpod-eb71f9548441b1187374c7c0e6d8cce8423b2f63a6851686c02a541c37c8e5ca.scope: Deactivated successfully.
Jan 21 11:03:26 np0005590810 podman[76071]: 2026-01-21 16:03:26.660572706 +0000 UTC m=+0.520366775 container died eb71f9548441b1187374c7c0e6d8cce8423b2f63a6851686c02a541c37c8e5ca (image=quay.io/ceph/ceph:v19, name=thirsty_shamir, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 21 11:03:26 np0005590810 systemd[1]: var-lib-containers-storage-overlay-546d407dbd973f130422852938e4e3f9763959d240f6a47df28723fd2a8d70a3-merged.mount: Deactivated successfully.
Jan 21 11:03:26 np0005590810 podman[76071]: 2026-01-21 16:03:26.700754104 +0000 UTC m=+0.560548173 container remove eb71f9548441b1187374c7c0e6d8cce8423b2f63a6851686c02a541c37c8e5ca (image=quay.io/ceph/ceph:v19, name=thirsty_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 21 11:03:26 np0005590810 systemd[1]: libpod-conmon-eb71f9548441b1187374c7c0e6d8cce8423b2f63a6851686c02a541c37c8e5ca.scope: Deactivated successfully.
Jan 21 11:03:26 np0005590810 podman[76148]: 2026-01-21 16:03:26.772759217 +0000 UTC m=+0.049953880 container create 4ae4f352a41f28b07c2aeef0887c250d03757450215756d141f45a191ebe2714 (image=quay.io/ceph/ceph:v19, name=nice_herschel, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 11:03:26 np0005590810 systemd[1]: Started libpod-conmon-4ae4f352a41f28b07c2aeef0887c250d03757450215756d141f45a191ebe2714.scope.
Jan 21 11:03:26 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c391c67e986e3ca493c5e0f90c985a0b5030de7b8f9381e0377d2e5c0371854/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c391c67e986e3ca493c5e0f90c985a0b5030de7b8f9381e0377d2e5c0371854/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c391c67e986e3ca493c5e0f90c985a0b5030de7b8f9381e0377d2e5c0371854/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:26 np0005590810 podman[76148]: 2026-01-21 16:03:26.748429193 +0000 UTC m=+0.025623876 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:26 np0005590810 podman[76148]: 2026-01-21 16:03:26.844527785 +0000 UTC m=+0.121722488 container init 4ae4f352a41f28b07c2aeef0887c250d03757450215756d141f45a191ebe2714 (image=quay.io/ceph/ceph:v19, name=nice_herschel, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:26 np0005590810 podman[76148]: 2026-01-21 16:03:26.850039426 +0000 UTC m=+0.127234089 container start 4ae4f352a41f28b07c2aeef0887c250d03757450215756d141f45a191ebe2714 (image=quay.io/ceph/ceph:v19, name=nice_herschel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:03:26 np0005590810 podman[76148]: 2026-01-21 16:03:26.853516053 +0000 UTC m=+0.130710716 container attach 4ae4f352a41f28b07c2aeef0887c250d03757450215756d141f45a191ebe2714 (image=quay.io/ceph/ceph:v19, name=nice_herschel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 21 11:03:27 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:27 np0005590810 ceph-mon[74380]: Added host compute-0
Jan 21 11:03:27 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:27 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:03:27 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 21 11:03:27 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 21 11:03:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 11:03:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:27 np0005590810 nice_herschel[76164]: Scheduled mgr update...
Jan 21 11:03:27 np0005590810 systemd[1]: libpod-4ae4f352a41f28b07c2aeef0887c250d03757450215756d141f45a191ebe2714.scope: Deactivated successfully.
Jan 21 11:03:27 np0005590810 podman[76148]: 2026-01-21 16:03:27.245539606 +0000 UTC m=+0.522734269 container died 4ae4f352a41f28b07c2aeef0887c250d03757450215756d141f45a191ebe2714 (image=quay.io/ceph/ceph:v19, name=nice_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:03:27 np0005590810 systemd[1]: var-lib-containers-storage-overlay-7c391c67e986e3ca493c5e0f90c985a0b5030de7b8f9381e0377d2e5c0371854-merged.mount: Deactivated successfully.
Jan 21 11:03:27 np0005590810 podman[76148]: 2026-01-21 16:03:27.286953971 +0000 UTC m=+0.564148634 container remove 4ae4f352a41f28b07c2aeef0887c250d03757450215756d141f45a191ebe2714 (image=quay.io/ceph/ceph:v19, name=nice_herschel, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:27 np0005590810 systemd[1]: libpod-conmon-4ae4f352a41f28b07c2aeef0887c250d03757450215756d141f45a191ebe2714.scope: Deactivated successfully.
Jan 21 11:03:27 np0005590810 podman[76201]: 2026-01-21 16:03:27.34299274 +0000 UTC m=+0.037916018 container create db490f5844e1d6ab0e7d40d77170bdacfb079ccbc5d67513ae0e2cc387895c3c (image=quay.io/ceph/ceph:v19, name=jolly_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 11:03:27 np0005590810 systemd[1]: Started libpod-conmon-db490f5844e1d6ab0e7d40d77170bdacfb079ccbc5d67513ae0e2cc387895c3c.scope.
Jan 21 11:03:27 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141da153128758cf29bb7de90fd952438a7c96993b5a1a9ce3ea1b0646427fec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141da153128758cf29bb7de90fd952438a7c96993b5a1a9ce3ea1b0646427fec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141da153128758cf29bb7de90fd952438a7c96993b5a1a9ce3ea1b0646427fec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:27 np0005590810 podman[76201]: 2026-01-21 16:03:27.404858248 +0000 UTC m=+0.099781526 container init db490f5844e1d6ab0e7d40d77170bdacfb079ccbc5d67513ae0e2cc387895c3c (image=quay.io/ceph/ceph:v19, name=jolly_pare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:03:27 np0005590810 podman[76201]: 2026-01-21 16:03:27.41039251 +0000 UTC m=+0.105315788 container start db490f5844e1d6ab0e7d40d77170bdacfb079ccbc5d67513ae0e2cc387895c3c (image=quay.io/ceph/ceph:v19, name=jolly_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:27 np0005590810 podman[76201]: 2026-01-21 16:03:27.413181277 +0000 UTC m=+0.108104555 container attach db490f5844e1d6ab0e7d40d77170bdacfb079ccbc5d67513ae0e2cc387895c3c (image=quay.io/ceph/ceph:v19, name=jolly_pare, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:03:27 np0005590810 podman[76201]: 2026-01-21 16:03:27.327175839 +0000 UTC m=+0.022099137 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:03:27 np0005590810 podman[76103]: 2026-01-21 16:03:27.58991963 +0000 UTC m=+1.274736270 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:27 np0005590810 podman[76256]: 2026-01-21 16:03:27.724494546 +0000 UTC m=+0.044998738 container create bd322b2ff61b36f07e03687338d0a4ed77dc334c7820fb195c4a8af2de62b5fa (image=quay.io/ceph/ceph:v19, name=vibrant_engelbart, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 21 11:03:27 np0005590810 systemd[1]: Started libpod-conmon-bd322b2ff61b36f07e03687338d0a4ed77dc334c7820fb195c4a8af2de62b5fa.scope.
Jan 21 11:03:27 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:03:27 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service crash spec with placement *
Jan 21 11:03:27 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 21 11:03:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 21 11:03:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:27 np0005590810 jolly_pare[76217]: Scheduled crash update...
Jan 21 11:03:27 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:27 np0005590810 podman[76256]: 2026-01-21 16:03:27.79844094 +0000 UTC m=+0.118945182 container init bd322b2ff61b36f07e03687338d0a4ed77dc334c7820fb195c4a8af2de62b5fa (image=quay.io/ceph/ceph:v19, name=vibrant_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 21 11:03:27 np0005590810 podman[76256]: 2026-01-21 16:03:27.702327688 +0000 UTC m=+0.022831900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:27 np0005590810 systemd[1]: libpod-db490f5844e1d6ab0e7d40d77170bdacfb079ccbc5d67513ae0e2cc387895c3c.scope: Deactivated successfully.
Jan 21 11:03:27 np0005590810 podman[76201]: 2026-01-21 16:03:27.802174385 +0000 UTC m=+0.497097743 container died db490f5844e1d6ab0e7d40d77170bdacfb079ccbc5d67513ae0e2cc387895c3c (image=quay.io/ceph/ceph:v19, name=jolly_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 21 11:03:27 np0005590810 podman[76256]: 2026-01-21 16:03:27.813494447 +0000 UTC m=+0.133998639 container start bd322b2ff61b36f07e03687338d0a4ed77dc334c7820fb195c4a8af2de62b5fa (image=quay.io/ceph/ceph:v19, name=vibrant_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:03:27 np0005590810 podman[76256]: 2026-01-21 16:03:27.81618259 +0000 UTC m=+0.136686792 container attach bd322b2ff61b36f07e03687338d0a4ed77dc334c7820fb195c4a8af2de62b5fa (image=quay.io/ceph/ceph:v19, name=vibrant_engelbart, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 11:03:27 np0005590810 systemd[1]: var-lib-containers-storage-overlay-141da153128758cf29bb7de90fd952438a7c96993b5a1a9ce3ea1b0646427fec-merged.mount: Deactivated successfully.
Jan 21 11:03:27 np0005590810 podman[76201]: 2026-01-21 16:03:27.852374244 +0000 UTC m=+0.547297552 container remove db490f5844e1d6ab0e7d40d77170bdacfb079ccbc5d67513ae0e2cc387895c3c (image=quay.io/ceph/ceph:v19, name=jolly_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:27 np0005590810 systemd[1]: libpod-conmon-db490f5844e1d6ab0e7d40d77170bdacfb079ccbc5d67513ae0e2cc387895c3c.scope: Deactivated successfully.
Jan 21 11:03:27 np0005590810 vibrant_engelbart[76273]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Jan 21 11:03:27 np0005590810 systemd[1]: libpod-bd322b2ff61b36f07e03687338d0a4ed77dc334c7820fb195c4a8af2de62b5fa.scope: Deactivated successfully.
Jan 21 11:03:27 np0005590810 podman[76256]: 2026-01-21 16:03:27.905574394 +0000 UTC m=+0.226078586 container died bd322b2ff61b36f07e03687338d0a4ed77dc334c7820fb195c4a8af2de62b5fa (image=quay.io/ceph/ceph:v19, name=vibrant_engelbart, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 11:03:27 np0005590810 podman[76287]: 2026-01-21 16:03:27.935149762 +0000 UTC m=+0.051196040 container create 70bf53cbcf93f989dbeb37fee9ef97aabd854dac4ea1dde780d1cefa93131b31 (image=quay.io/ceph/ceph:v19, name=modest_shaw, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:27 np0005590810 podman[76256]: 2026-01-21 16:03:27.958738264 +0000 UTC m=+0.279242456 container remove bd322b2ff61b36f07e03687338d0a4ed77dc334c7820fb195c4a8af2de62b5fa (image=quay.io/ceph/ceph:v19, name=vibrant_engelbart, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:03:27 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:27 np0005590810 systemd[1]: Started libpod-conmon-70bf53cbcf93f989dbeb37fee9ef97aabd854dac4ea1dde780d1cefa93131b31.scope.
Jan 21 11:03:27 np0005590810 systemd[1]: libpod-conmon-bd322b2ff61b36f07e03687338d0a4ed77dc334c7820fb195c4a8af2de62b5fa.scope: Deactivated successfully.
Jan 21 11:03:27 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef03c488c192de25aa3de100ce408828741bc3369c8dd2648fd0519041bf2ae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef03c488c192de25aa3de100ce408828741bc3369c8dd2648fd0519041bf2ae3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef03c488c192de25aa3de100ce408828741bc3369c8dd2648fd0519041bf2ae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 21 11:03:28 np0005590810 ceph-mon[74380]: Saving service mon spec with placement count:5
Jan 21 11:03:28 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:28 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:28 np0005590810 podman[76287]: 2026-01-21 16:03:27.91092027 +0000 UTC m=+0.026966508 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:28 np0005590810 podman[76287]: 2026-01-21 16:03:28.00730878 +0000 UTC m=+0.123355078 container init 70bf53cbcf93f989dbeb37fee9ef97aabd854dac4ea1dde780d1cefa93131b31 (image=quay.io/ceph/ceph:v19, name=modest_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:28 np0005590810 podman[76287]: 2026-01-21 16:03:28.012400068 +0000 UTC m=+0.128446306 container start 70bf53cbcf93f989dbeb37fee9ef97aabd854dac4ea1dde780d1cefa93131b31 (image=quay.io/ceph/ceph:v19, name=modest_shaw, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 21 11:03:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:28 np0005590810 podman[76287]: 2026-01-21 16:03:28.016300259 +0000 UTC m=+0.132346517 container attach 70bf53cbcf93f989dbeb37fee9ef97aabd854dac4ea1dde780d1cefa93131b31 (image=quay.io/ceph/ceph:v19, name=modest_shaw, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:28 np0005590810 systemd[1]: var-lib-containers-storage-overlay-b1c59844779540df4e163012b3676f40b6c010aa2d3c42e1c13f92c883236712-merged.mount: Deactivated successfully.
Jan 21 11:03:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 21 11:03:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2767758917' entity='client.admin' 
Jan 21 11:03:28 np0005590810 systemd[1]: libpod-70bf53cbcf93f989dbeb37fee9ef97aabd854dac4ea1dde780d1cefa93131b31.scope: Deactivated successfully.
Jan 21 11:03:28 np0005590810 podman[76287]: 2026-01-21 16:03:28.385610237 +0000 UTC m=+0.501656465 container died 70bf53cbcf93f989dbeb37fee9ef97aabd854dac4ea1dde780d1cefa93131b31 (image=quay.io/ceph/ceph:v19, name=modest_shaw, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:28 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ef03c488c192de25aa3de100ce408828741bc3369c8dd2648fd0519041bf2ae3-merged.mount: Deactivated successfully.
Jan 21 11:03:28 np0005590810 podman[76287]: 2026-01-21 16:03:28.425427603 +0000 UTC m=+0.541473841 container remove 70bf53cbcf93f989dbeb37fee9ef97aabd854dac4ea1dde780d1cefa93131b31 (image=quay.io/ceph/ceph:v19, name=modest_shaw, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:28 np0005590810 systemd[1]: libpod-conmon-70bf53cbcf93f989dbeb37fee9ef97aabd854dac4ea1dde780d1cefa93131b31.scope: Deactivated successfully.
Jan 21 11:03:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:03:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:28 np0005590810 podman[76430]: 2026-01-21 16:03:28.485724813 +0000 UTC m=+0.039277959 container create 47d8d29c0ed4cb2078d1caa53d1469eb6a0d4439d43281324ce2ebd67a3d99c1 (image=quay.io/ceph/ceph:v19, name=cranky_noether, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:28 np0005590810 systemd[1]: Started libpod-conmon-47d8d29c0ed4cb2078d1caa53d1469eb6a0d4439d43281324ce2ebd67a3d99c1.scope.
Jan 21 11:03:28 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc52c81e0d106c33d050c22704f48590c53a095c0c32a368c0f5d4bab974e2bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc52c81e0d106c33d050c22704f48590c53a095c0c32a368c0f5d4bab974e2bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc52c81e0d106c33d050c22704f48590c53a095c0c32a368c0f5d4bab974e2bd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:28 np0005590810 podman[76430]: 2026-01-21 16:03:28.467768867 +0000 UTC m=+0.021322033 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:28 np0005590810 podman[76430]: 2026-01-21 16:03:28.572736203 +0000 UTC m=+0.126289369 container init 47d8d29c0ed4cb2078d1caa53d1469eb6a0d4439d43281324ce2ebd67a3d99c1 (image=quay.io/ceph/ceph:v19, name=cranky_noether, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 21 11:03:28 np0005590810 podman[76430]: 2026-01-21 16:03:28.579052869 +0000 UTC m=+0.132606015 container start 47d8d29c0ed4cb2078d1caa53d1469eb6a0d4439d43281324ce2ebd67a3d99c1 (image=quay.io/ceph/ceph:v19, name=cranky_noether, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:28 np0005590810 podman[76430]: 2026-01-21 16:03:28.582874608 +0000 UTC m=+0.136427844 container attach 47d8d29c0ed4cb2078d1caa53d1469eb6a0d4439d43281324ce2ebd67a3d99c1 (image=quay.io/ceph/ceph:v19, name=cranky_noether, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:03:28 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:03:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 21 11:03:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:29 np0005590810 systemd[1]: libpod-47d8d29c0ed4cb2078d1caa53d1469eb6a0d4439d43281324ce2ebd67a3d99c1.scope: Deactivated successfully.
Jan 21 11:03:29 np0005590810 podman[76430]: 2026-01-21 16:03:29.014335435 +0000 UTC m=+0.567888581 container died 47d8d29c0ed4cb2078d1caa53d1469eb6a0d4439d43281324ce2ebd67a3d99c1 (image=quay.io/ceph/ceph:v19, name=cranky_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:29 np0005590810 ceph-mon[74380]: Saving service mgr spec with placement count:2
Jan 21 11:03:29 np0005590810 ceph-mon[74380]: Saving service crash spec with placement *
Jan 21 11:03:29 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:29 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2767758917' entity='client.admin' 
Jan 21 11:03:29 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:29 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:29 np0005590810 systemd[1]: var-lib-containers-storage-overlay-bc52c81e0d106c33d050c22704f48590c53a095c0c32a368c0f5d4bab974e2bd-merged.mount: Deactivated successfully.
Jan 21 11:03:29 np0005590810 podman[76430]: 2026-01-21 16:03:29.068402011 +0000 UTC m=+0.621955157 container remove 47d8d29c0ed4cb2078d1caa53d1469eb6a0d4439d43281324ce2ebd67a3d99c1 (image=quay.io/ceph/ceph:v19, name=cranky_noether, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:29 np0005590810 systemd[1]: libpod-conmon-47d8d29c0ed4cb2078d1caa53d1469eb6a0d4439d43281324ce2ebd67a3d99c1.scope: Deactivated successfully.
Jan 21 11:03:29 np0005590810 podman[76586]: 2026-01-21 16:03:29.124864153 +0000 UTC m=+0.037360750 container create 1b67d390426f9d58fea3f6a552867e05c7a869265873ffc14afd4b6cd3de53e5 (image=quay.io/ceph/ceph:v19, name=blissful_dijkstra, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 21 11:03:29 np0005590810 systemd[1]: Started libpod-conmon-1b67d390426f9d58fea3f6a552867e05c7a869265873ffc14afd4b6cd3de53e5.scope.
Jan 21 11:03:29 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef56e7463cb11d6a1990a056f051f2cbf3111cd2de05537101e463d009a81caf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef56e7463cb11d6a1990a056f051f2cbf3111cd2de05537101e463d009a81caf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef56e7463cb11d6a1990a056f051f2cbf3111cd2de05537101e463d009a81caf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:29 np0005590810 podman[76586]: 2026-01-21 16:03:29.10862792 +0000 UTC m=+0.021124547 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:29 np0005590810 podman[76586]: 2026-01-21 16:03:29.209348665 +0000 UTC m=+0.121845392 container init 1b67d390426f9d58fea3f6a552867e05c7a869265873ffc14afd4b6cd3de53e5 (image=quay.io/ceph/ceph:v19, name=blissful_dijkstra, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 11:03:29 np0005590810 podman[76586]: 2026-01-21 16:03:29.214564816 +0000 UTC m=+0.127061423 container start 1b67d390426f9d58fea3f6a552867e05c7a869265873ffc14afd4b6cd3de53e5 (image=quay.io/ceph/ceph:v19, name=blissful_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 11:03:29 np0005590810 podman[76586]: 2026-01-21 16:03:29.219975184 +0000 UTC m=+0.132471811 container attach 1b67d390426f9d58fea3f6a552867e05c7a869265873ffc14afd4b6cd3de53e5 (image=quay.io/ceph/ceph:v19, name=blissful_dijkstra, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:29 np0005590810 podman[76619]: 2026-01-21 16:03:29.225343031 +0000 UTC m=+0.060342513 container exec 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:03:29 np0005590810 podman[76619]: 2026-01-21 16:03:29.324011272 +0000 UTC m=+0.159010734 container exec_died 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:03:29 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:29 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:03:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 11:03:29 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:29 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Added label _admin to host compute-0
Jan 21 11:03:29 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 21 11:03:29 np0005590810 blissful_dijkstra[76625]: Added label _admin to host compute-0
Jan 21 11:03:29 np0005590810 systemd[1]: libpod-1b67d390426f9d58fea3f6a552867e05c7a869265873ffc14afd4b6cd3de53e5.scope: Deactivated successfully.
Jan 21 11:03:29 np0005590810 podman[76744]: 2026-01-21 16:03:29.664335891 +0000 UTC m=+0.027732162 container died 1b67d390426f9d58fea3f6a552867e05c7a869265873ffc14afd4b6cd3de53e5 (image=quay.io/ceph/ceph:v19, name=blissful_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:03:29 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ef56e7463cb11d6a1990a056f051f2cbf3111cd2de05537101e463d009a81caf-merged.mount: Deactivated successfully.
Jan 21 11:03:29 np0005590810 podman[76744]: 2026-01-21 16:03:29.707053866 +0000 UTC m=+0.070450097 container remove 1b67d390426f9d58fea3f6a552867e05c7a869265873ffc14afd4b6cd3de53e5 (image=quay.io/ceph/ceph:v19, name=blissful_dijkstra, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 21 11:03:29 np0005590810 systemd[1]: libpod-conmon-1b67d390426f9d58fea3f6a552867e05c7a869265873ffc14afd4b6cd3de53e5.scope: Deactivated successfully.
Jan 21 11:03:29 np0005590810 podman[76760]: 2026-01-21 16:03:29.795366827 +0000 UTC m=+0.050718475 container create ddd82a87b09a29333e3546a10fd5d59c53f4de5ae11be17218072bbcc379b88b (image=quay.io/ceph/ceph:v19, name=distracted_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 11:03:29 np0005590810 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76786 (sysctl)
Jan 21 11:03:29 np0005590810 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 21 11:03:29 np0005590810 systemd[1]: Started libpod-conmon-ddd82a87b09a29333e3546a10fd5d59c53f4de5ae11be17218072bbcc379b88b.scope.
Jan 21 11:03:29 np0005590810 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 21 11:03:29 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6301bdb2c7fdd2e0d9d23f518e94cba6a0d67ee0542eefbd55766d0c87659654/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6301bdb2c7fdd2e0d9d23f518e94cba6a0d67ee0542eefbd55766d0c87659654/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6301bdb2c7fdd2e0d9d23f518e94cba6a0d67ee0542eefbd55766d0c87659654/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:29 np0005590810 podman[76760]: 2026-01-21 16:03:29.774896231 +0000 UTC m=+0.030247899 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:29 np0005590810 podman[76760]: 2026-01-21 16:03:29.877824195 +0000 UTC m=+0.133175883 container init ddd82a87b09a29333e3546a10fd5d59c53f4de5ae11be17218072bbcc379b88b (image=quay.io/ceph/ceph:v19, name=distracted_spence, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:03:29 np0005590810 podman[76760]: 2026-01-21 16:03:29.886180644 +0000 UTC m=+0.141532312 container start ddd82a87b09a29333e3546a10fd5d59c53f4de5ae11be17218072bbcc379b88b (image=quay.io/ceph/ceph:v19, name=distracted_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:03:29 np0005590810 podman[76760]: 2026-01-21 16:03:29.890367374 +0000 UTC m=+0.145719072 container attach ddd82a87b09a29333e3546a10fd5d59c53f4de5ae11be17218072bbcc379b88b (image=quay.io/ceph/ceph:v19, name=distracted_spence, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:29 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:30 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:30 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 21 11:03:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/216413809' entity='client.admin' 
Jan 21 11:03:30 np0005590810 distracted_spence[76792]: set mgr/dashboard/cluster/status
Jan 21 11:03:30 np0005590810 systemd[1]: libpod-ddd82a87b09a29333e3546a10fd5d59c53f4de5ae11be17218072bbcc379b88b.scope: Deactivated successfully.
Jan 21 11:03:30 np0005590810 podman[76760]: 2026-01-21 16:03:30.36180714 +0000 UTC m=+0.617158788 container died ddd82a87b09a29333e3546a10fd5d59c53f4de5ae11be17218072bbcc379b88b (image=quay.io/ceph/ceph:v19, name=distracted_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:30 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6301bdb2c7fdd2e0d9d23f518e94cba6a0d67ee0542eefbd55766d0c87659654-merged.mount: Deactivated successfully.
Jan 21 11:03:30 np0005590810 podman[76760]: 2026-01-21 16:03:30.400523811 +0000 UTC m=+0.655875459 container remove ddd82a87b09a29333e3546a10fd5d59c53f4de5ae11be17218072bbcc379b88b (image=quay.io/ceph/ceph:v19, name=distracted_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:03:30 np0005590810 systemd[1]: libpod-conmon-ddd82a87b09a29333e3546a10fd5d59c53f4de5ae11be17218072bbcc379b88b.scope: Deactivated successfully.
Jan 21 11:03:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:03:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:30 np0005590810 python3[76991]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:03:30 np0005590810 podman[77032]: 2026-01-21 16:03:30.996918035 +0000 UTC m=+0.053614784 container create e23a98a90058fbdc074b36e3ab3140b52a38100f727db2d8105582bbd94d5357 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_clarke, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 21 11:03:31 np0005590810 ceph-mon[74380]: Added label _admin to host compute-0
Jan 21 11:03:31 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/216413809' entity='client.admin' 
Jan 21 11:03:31 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:31 np0005590810 systemd[1]: Started libpod-conmon-e23a98a90058fbdc074b36e3ab3140b52a38100f727db2d8105582bbd94d5357.scope.
Jan 21 11:03:31 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:31 np0005590810 podman[77039]: 2026-01-21 16:03:31.056672869 +0000 UTC m=+0.095121432 container create 2d192bd81becef81a76357b8f106e6cd235bfca62ccac9d1cc6c837db1da993e (image=quay.io/ceph/ceph:v19, name=condescending_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:31 np0005590810 podman[77032]: 2026-01-21 16:03:31.061585052 +0000 UTC m=+0.118281821 container init e23a98a90058fbdc074b36e3ab3140b52a38100f727db2d8105582bbd94d5357 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:31 np0005590810 podman[77032]: 2026-01-21 16:03:30.966162181 +0000 UTC m=+0.022858950 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:03:31 np0005590810 podman[77032]: 2026-01-21 16:03:31.072576462 +0000 UTC m=+0.129273211 container start e23a98a90058fbdc074b36e3ab3140b52a38100f727db2d8105582bbd94d5357 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 11:03:31 np0005590810 podman[77032]: 2026-01-21 16:03:31.077250758 +0000 UTC m=+0.133947537 container attach e23a98a90058fbdc074b36e3ab3140b52a38100f727db2d8105582bbd94d5357 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_clarke, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 21 11:03:31 np0005590810 systemd[1]: libpod-e23a98a90058fbdc074b36e3ab3140b52a38100f727db2d8105582bbd94d5357.scope: Deactivated successfully.
Jan 21 11:03:31 np0005590810 dreamy_clarke[77059]: 167 167
Jan 21 11:03:31 np0005590810 podman[77032]: 2026-01-21 16:03:31.07861338 +0000 UTC m=+0.135310129 container died e23a98a90058fbdc074b36e3ab3140b52a38100f727db2d8105582bbd94d5357 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_clarke, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 11:03:31 np0005590810 systemd[1]: Started libpod-conmon-2d192bd81becef81a76357b8f106e6cd235bfca62ccac9d1cc6c837db1da993e.scope.
Jan 21 11:03:31 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ddf149c7981cf70f9b1b226c91d8d985dca03499b77082f402f51dc2ce6eb825-merged.mount: Deactivated successfully.
Jan 21 11:03:31 np0005590810 podman[77032]: 2026-01-21 16:03:31.112366147 +0000 UTC m=+0.169062896 container remove e23a98a90058fbdc074b36e3ab3140b52a38100f727db2d8105582bbd94d5357 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_clarke, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:03:31 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205cae98277b08f90133924c466a7c3c754f6f17b90aea78be03858838b06b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205cae98277b08f90133924c466a7c3c754f6f17b90aea78be03858838b06b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:31 np0005590810 podman[77039]: 2026-01-21 16:03:31.026427481 +0000 UTC m=+0.064876074 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:31 np0005590810 podman[77039]: 2026-01-21 16:03:31.128602681 +0000 UTC m=+0.167051244 container init 2d192bd81becef81a76357b8f106e6cd235bfca62ccac9d1cc6c837db1da993e (image=quay.io/ceph/ceph:v19, name=condescending_mclaren, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 11:03:31 np0005590810 podman[77039]: 2026-01-21 16:03:31.133683928 +0000 UTC m=+0.172132491 container start 2d192bd81becef81a76357b8f106e6cd235bfca62ccac9d1cc6c837db1da993e (image=quay.io/ceph/ceph:v19, name=condescending_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:31 np0005590810 podman[77039]: 2026-01-21 16:03:31.13597208 +0000 UTC m=+0.174420633 container attach 2d192bd81becef81a76357b8f106e6cd235bfca62ccac9d1cc6c837db1da993e (image=quay.io/ceph/ceph:v19, name=condescending_mclaren, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 21 11:03:31 np0005590810 systemd[1]: libpod-conmon-e23a98a90058fbdc074b36e3ab3140b52a38100f727db2d8105582bbd94d5357.scope: Deactivated successfully.
Jan 21 11:03:31 np0005590810 podman[77094]: 2026-01-21 16:03:31.272861517 +0000 UTC m=+0.037614378 container create f3cfaa71bbe86ce3adb49057a5ec5c1ca41a275d8973f88b892bfbcb3c873c3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:03:31 np0005590810 systemd[1]: Started libpod-conmon-f3cfaa71bbe86ce3adb49057a5ec5c1ca41a275d8973f88b892bfbcb3c873c3c.scope.
Jan 21 11:03:31 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74bcd31e7e304574d7a920ab503ac84b4f86f7d6f1add1663c1b90a3dee8764/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74bcd31e7e304574d7a920ab503ac84b4f86f7d6f1add1663c1b90a3dee8764/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74bcd31e7e304574d7a920ab503ac84b4f86f7d6f1add1663c1b90a3dee8764/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74bcd31e7e304574d7a920ab503ac84b4f86f7d6f1add1663c1b90a3dee8764/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:31 np0005590810 podman[77094]: 2026-01-21 16:03:31.342659082 +0000 UTC m=+0.107411953 container init f3cfaa71bbe86ce3adb49057a5ec5c1ca41a275d8973f88b892bfbcb3c873c3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:03:31 np0005590810 podman[77094]: 2026-01-21 16:03:31.349803404 +0000 UTC m=+0.114556265 container start f3cfaa71bbe86ce3adb49057a5ec5c1ca41a275d8973f88b892bfbcb3c873c3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:03:31 np0005590810 podman[77094]: 2026-01-21 16:03:31.255411135 +0000 UTC m=+0.020164036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:03:31 np0005590810 podman[77094]: 2026-01-21 16:03:31.358542185 +0000 UTC m=+0.123295066 container attach f3cfaa71bbe86ce3adb49057a5ec5c1ca41a275d8973f88b892bfbcb3c873c3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_jemison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:03:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 21 11:03:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/844854691' entity='client.admin' 
Jan 21 11:03:31 np0005590810 systemd[1]: libpod-2d192bd81becef81a76357b8f106e6cd235bfca62ccac9d1cc6c837db1da993e.scope: Deactivated successfully.
Jan 21 11:03:31 np0005590810 podman[77039]: 2026-01-21 16:03:31.492073048 +0000 UTC m=+0.530521611 container died 2d192bd81becef81a76357b8f106e6cd235bfca62ccac9d1cc6c837db1da993e (image=quay.io/ceph/ceph:v19, name=condescending_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:03:31 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a205cae98277b08f90133924c466a7c3c754f6f17b90aea78be03858838b06b9-merged.mount: Deactivated successfully.
Jan 21 11:03:31 np0005590810 podman[77039]: 2026-01-21 16:03:31.52632204 +0000 UTC m=+0.564770603 container remove 2d192bd81becef81a76357b8f106e6cd235bfca62ccac9d1cc6c837db1da993e (image=quay.io/ceph/ceph:v19, name=condescending_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 11:03:31 np0005590810 systemd[1]: libpod-conmon-2d192bd81becef81a76357b8f106e6cd235bfca62ccac9d1cc6c837db1da993e.scope: Deactivated successfully.
Jan 21 11:03:31 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:32 np0005590810 brave_jemison[77127]: [
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:    {
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:        "available": false,
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:        "being_replaced": false,
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:        "ceph_device_lvm": false,
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:        "lsm_data": {},
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:        "lvs": [],
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:        "path": "/dev/sr0",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:        "rejected_reasons": [
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "Has a FileSystem",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "Insufficient space (<5GB)"
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:        ],
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:        "sys_api": {
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "actuators": null,
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "device_nodes": [
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:                "sr0"
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            ],
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "devname": "sr0",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "human_readable_size": "482.00 KB",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "id_bus": "ata",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "model": "QEMU DVD-ROM",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "nr_requests": "2",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "parent": "/dev/sr0",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "partitions": {},
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "path": "/dev/sr0",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "removable": "1",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "rev": "2.5+",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "ro": "0",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "rotational": "1",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "sas_address": "",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "sas_device_handle": "",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "scheduler_mode": "mq-deadline",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "sectors": 0,
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "sectorsize": "2048",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "size": 493568.0,
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "support_discard": "2048",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "type": "disk",
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:            "vendor": "QEMU"
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:        }
Jan 21 11:03:32 np0005590810 brave_jemison[77127]:    }
Jan 21 11:03:32 np0005590810 brave_jemison[77127]: ]
Jan 21 11:03:32 np0005590810 systemd[1]: libpod-f3cfaa71bbe86ce3adb49057a5ec5c1ca41a275d8973f88b892bfbcb3c873c3c.scope: Deactivated successfully.
Jan 21 11:03:32 np0005590810 podman[77094]: 2026-01-21 16:03:32.128093731 +0000 UTC m=+0.892846602 container died f3cfaa71bbe86ce3adb49057a5ec5c1ca41a275d8973f88b892bfbcb3c873c3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 11:03:32 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a74bcd31e7e304574d7a920ab503ac84b4f86f7d6f1add1663c1b90a3dee8764-merged.mount: Deactivated successfully.
Jan 21 11:03:32 np0005590810 podman[77094]: 2026-01-21 16:03:32.165975157 +0000 UTC m=+0.930728018 container remove f3cfaa71bbe86ce3adb49057a5ec5c1ca41a275d8973f88b892bfbcb3c873c3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:32 np0005590810 systemd[1]: libpod-conmon-f3cfaa71bbe86ce3adb49057a5ec5c1ca41a275d8973f88b892bfbcb3c873c3c.scope: Deactivated successfully.
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:03:32 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 21 11:03:32 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 21 11:03:32 np0005590810 ansible-async_wrapper.py[78496]: Invoked with j49938343027 30 /home/zuul/.ansible/tmp/ansible-tmp-1769011411.8834326-37250-273553523276250/AnsiballZ_command.py _
Jan 21 11:03:32 np0005590810 ansible-async_wrapper.py[78552]: Starting module and watcher
Jan 21 11:03:32 np0005590810 ansible-async_wrapper.py[78552]: Start watching 78554 (30)
Jan 21 11:03:32 np0005590810 ansible-async_wrapper.py[78554]: Start module (78554)
Jan 21 11:03:32 np0005590810 ansible-async_wrapper.py[78496]: Return async_wrapper task started.
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/844854691' entity='client.admin' 
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:03:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:03:32 np0005590810 python3[78558]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:03:32 np0005590810 podman[78625]: 2026-01-21 16:03:32.670428838 +0000 UTC m=+0.039389493 container create 60907b7db13a01635a07d3c66911790e77dbb2fef55182e72163b0fc5069cb6a (image=quay.io/ceph/ceph:v19, name=boring_morse, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:03:32 np0005590810 systemd[1]: Started libpod-conmon-60907b7db13a01635a07d3c66911790e77dbb2fef55182e72163b0fc5069cb6a.scope.
Jan 21 11:03:32 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:32 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2ea1656d640fed881f6a5b9ac7a15ca01d85c56666c6cc2463b58e9ce0adc0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:32 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2ea1656d640fed881f6a5b9ac7a15ca01d85c56666c6cc2463b58e9ce0adc0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:32 np0005590810 podman[78625]: 2026-01-21 16:03:32.654187083 +0000 UTC m=+0.023147768 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:32 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:03:32 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:03:32 np0005590810 podman[78625]: 2026-01-21 16:03:32.755205697 +0000 UTC m=+0.124166382 container init 60907b7db13a01635a07d3c66911790e77dbb2fef55182e72163b0fc5069cb6a (image=quay.io/ceph/ceph:v19, name=boring_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:32 np0005590810 podman[78625]: 2026-01-21 16:03:32.763257887 +0000 UTC m=+0.132218552 container start 60907b7db13a01635a07d3c66911790e77dbb2fef55182e72163b0fc5069cb6a (image=quay.io/ceph/ceph:v19, name=boring_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:32 np0005590810 podman[78625]: 2026-01-21 16:03:32.767081836 +0000 UTC m=+0.136042501 container attach 60907b7db13a01635a07d3c66911790e77dbb2fef55182e72163b0fc5069cb6a (image=quay.io/ceph/ceph:v19, name=boring_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:33 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 11:03:33 np0005590810 boring_morse[78682]: 
Jan 21 11:03:33 np0005590810 boring_morse[78682]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 11:03:33 np0005590810 systemd[1]: libpod-60907b7db13a01635a07d3c66911790e77dbb2fef55182e72163b0fc5069cb6a.scope: Deactivated successfully.
Jan 21 11:03:33 np0005590810 podman[78625]: 2026-01-21 16:03:33.15686392 +0000 UTC m=+0.525824605 container died 60907b7db13a01635a07d3c66911790e77dbb2fef55182e72163b0fc5069cb6a (image=quay.io/ceph/ceph:v19, name=boring_morse, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:03:33 np0005590810 systemd[1]: var-lib-containers-storage-overlay-da2ea1656d640fed881f6a5b9ac7a15ca01d85c56666c6cc2463b58e9ce0adc0-merged.mount: Deactivated successfully.
Jan 21 11:03:33 np0005590810 podman[78625]: 2026-01-21 16:03:33.189927715 +0000 UTC m=+0.558888380 container remove 60907b7db13a01635a07d3c66911790e77dbb2fef55182e72163b0fc5069cb6a (image=quay.io/ceph/ceph:v19, name=boring_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:33 np0005590810 systemd[1]: libpod-conmon-60907b7db13a01635a07d3c66911790e77dbb2fef55182e72163b0fc5069cb6a.scope: Deactivated successfully.
Jan 21 11:03:33 np0005590810 ansible-async_wrapper.py[78554]: Module complete (78554)
Jan 21 11:03:33 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:03:33 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:03:33 np0005590810 ceph-mon[74380]: Updating compute-0:/etc/ceph/ceph.conf
Jan 21 11:03:33 np0005590810 ceph-mon[74380]: Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:03:33 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:03:33 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:03:33 np0005590810 python3[79247]: ansible-ansible.legacy.async_status Invoked with jid=j49938343027.78496 mode=status _async_dir=/root/.ansible_async
Jan 21 11:03:33 np0005590810 ceph-mgr[74671]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 11:03:34 np0005590810 python3[79408]: ansible-ansible.legacy.async_status Invoked with jid=j49938343027.78496 mode=cleanup _async_dir=/root/.ansible_async
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:34 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev da9162c9-6735-4c2f-8a12-2defe979a827 (Updating crash deployment (+1 -> 1))
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:03:34 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 21 11:03:34 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 11:03:34 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 11:03:34 np0005590810 python3[79569]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 11:03:34 np0005590810 podman[79612]: 2026-01-21 16:03:34.762364681 +0000 UTC m=+0.040119545 container create 7b6569c4c74e128d50e99f0aa7427628cf51ca99683614db57926324e9d0f79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_swirles, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 11:03:34 np0005590810 systemd[1]: Started libpod-conmon-7b6569c4c74e128d50e99f0aa7427628cf51ca99683614db57926324e9d0f79b.scope.
Jan 21 11:03:34 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:34 np0005590810 podman[79612]: 2026-01-21 16:03:34.837031628 +0000 UTC m=+0.114786492 container init 7b6569c4c74e128d50e99f0aa7427628cf51ca99683614db57926324e9d0f79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_swirles, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:34 np0005590810 podman[79612]: 2026-01-21 16:03:34.746822079 +0000 UTC m=+0.024576963 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:03:34 np0005590810 podman[79612]: 2026-01-21 16:03:34.843726085 +0000 UTC m=+0.121480949 container start 7b6569c4c74e128d50e99f0aa7427628cf51ca99683614db57926324e9d0f79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 11:03:34 np0005590810 podman[79612]: 2026-01-21 16:03:34.847298297 +0000 UTC m=+0.125053161 container attach 7b6569c4c74e128d50e99f0aa7427628cf51ca99683614db57926324e9d0f79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 11:03:34 np0005590810 cool_swirles[79629]: 167 167
Jan 21 11:03:34 np0005590810 systemd[1]: libpod-7b6569c4c74e128d50e99f0aa7427628cf51ca99683614db57926324e9d0f79b.scope: Deactivated successfully.
Jan 21 11:03:34 np0005590810 podman[79612]: 2026-01-21 16:03:34.84967045 +0000 UTC m=+0.127425324 container died 7b6569c4c74e128d50e99f0aa7427628cf51ca99683614db57926324e9d0f79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_swirles, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 11:03:34 np0005590810 systemd[1]: var-lib-containers-storage-overlay-7d0af54416fc2bdd684ac4d029717ba2776cda046f813ca159758cffc509dd32-merged.mount: Deactivated successfully.
Jan 21 11:03:34 np0005590810 podman[79612]: 2026-01-21 16:03:34.885312186 +0000 UTC m=+0.163067050 container remove 7b6569c4c74e128d50e99f0aa7427628cf51ca99683614db57926324e9d0f79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Jan 21 11:03:34 np0005590810 systemd[1]: libpod-conmon-7b6569c4c74e128d50e99f0aa7427628cf51ca99683614db57926324e9d0f79b.scope: Deactivated successfully.
Jan 21 11:03:34 np0005590810 systemd[1]: Reloading.
Jan 21 11:03:35 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:03:35 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:03:35 np0005590810 systemd[1]: Reloading.
Jan 21 11:03:35 np0005590810 python3[79708]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:03:35 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:03:35 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:03:35 np0005590810 podman[79746]: 2026-01-21 16:03:35.384222355 +0000 UTC m=+0.044351017 container create 33ac7bb108fca82cc4f61abaf09e257f60093cbdcf89b3cf2de94daca427c038 (image=quay.io/ceph/ceph:v19, name=laughing_snyder, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:03:35 np0005590810 podman[79746]: 2026-01-21 16:03:35.364190323 +0000 UTC m=+0.024319015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: Deploying daemon crash.compute-0 on compute-0
Jan 21 11:03:35 np0005590810 systemd[1]: Started libpod-conmon-33ac7bb108fca82cc4f61abaf09e257f60093cbdcf89b3cf2de94daca427c038.scope.
Jan 21 11:03:35 np0005590810 systemd[1]: Starting Ceph crash.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:03:35 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1ff1827b1beccbb993ab9a219f4b0e80db9e50cdf08eea8bc5e58dc8718c202/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1ff1827b1beccbb993ab9a219f4b0e80db9e50cdf08eea8bc5e58dc8718c202/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1ff1827b1beccbb993ab9a219f4b0e80db9e50cdf08eea8bc5e58dc8718c202/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:35 np0005590810 podman[79746]: 2026-01-21 16:03:35.559248566 +0000 UTC m=+0.219377278 container init 33ac7bb108fca82cc4f61abaf09e257f60093cbdcf89b3cf2de94daca427c038 (image=quay.io/ceph/ceph:v19, name=laughing_snyder, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:35 np0005590810 podman[79746]: 2026-01-21 16:03:35.569556485 +0000 UTC m=+0.229685147 container start 33ac7bb108fca82cc4f61abaf09e257f60093cbdcf89b3cf2de94daca427c038 (image=quay.io/ceph/ceph:v19, name=laughing_snyder, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:03:35 np0005590810 podman[79746]: 2026-01-21 16:03:35.573391855 +0000 UTC m=+0.233520517 container attach 33ac7bb108fca82cc4f61abaf09e257f60093cbdcf89b3cf2de94daca427c038 (image=quay.io/ceph/ceph:v19, name=laughing_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:03:35 np0005590810 podman[79836]: 2026-01-21 16:03:35.750398936 +0000 UTC m=+0.048444644 container create 251b3c96f85a543029daf3b261bf1f940f7f325b12f04b562f2111668c0e0eef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:03:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8a9cfd1f8f047b6ee04c5971483f7b9d5519058d441c0e568c1b64adb13043/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8a9cfd1f8f047b6ee04c5971483f7b9d5519058d441c0e568c1b64adb13043/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8a9cfd1f8f047b6ee04c5971483f7b9d5519058d441c0e568c1b64adb13043/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8a9cfd1f8f047b6ee04c5971483f7b9d5519058d441c0e568c1b64adb13043/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:35 np0005590810 podman[79836]: 2026-01-21 16:03:35.726748362 +0000 UTC m=+0.024794160 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:03:35 np0005590810 podman[79836]: 2026-01-21 16:03:35.827037184 +0000 UTC m=+0.125082922 container init 251b3c96f85a543029daf3b261bf1f940f7f325b12f04b562f2111668c0e0eef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:35 np0005590810 podman[79836]: 2026-01-21 16:03:35.831654287 +0000 UTC m=+0.129699985 container start 251b3c96f85a543029daf3b261bf1f940f7f325b12f04b562f2111668c0e0eef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:35 np0005590810 bash[79836]: 251b3c96f85a543029daf3b261bf1f940f7f325b12f04b562f2111668c0e0eef
Jan 21 11:03:35 np0005590810 systemd[1]: Started Ceph crash.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:03:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0[79851]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:35 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev da9162c9-6735-4c2f-8a12-2defe979a827 (Updating crash deployment (+1 -> 1))
Jan 21 11:03:35 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event da9162c9-6735-4c2f-8a12-2defe979a827 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:35 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 11:03:35 np0005590810 laughing_snyder[79764]: 
Jan 21 11:03:35 np0005590810 laughing_snyder[79764]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 11:03:35 np0005590810 ceph-mgr[74671]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 21 11:03:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:03:35 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 21 11:03:35 np0005590810 systemd[1]: libpod-33ac7bb108fca82cc4f61abaf09e257f60093cbdcf89b3cf2de94daca427c038.scope: Deactivated successfully.
Jan 21 11:03:35 np0005590810 podman[79746]: 2026-01-21 16:03:35.983797717 +0000 UTC m=+0.643926379 container died 33ac7bb108fca82cc4f61abaf09e257f60093cbdcf89b3cf2de94daca427c038 (image=quay.io/ceph/ceph:v19, name=laughing_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 11:03:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0[79851]: 2026-01-21T16:03:35.995+0000 7f5d96fb8640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 21 11:03:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0[79851]: 2026-01-21T16:03:35.995+0000 7f5d96fb8640 -1 AuthRegistry(0x7f5d90069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 21 11:03:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0[79851]: 2026-01-21T16:03:35.996+0000 7f5d96fb8640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 21 11:03:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0[79851]: 2026-01-21T16:03:35.996+0000 7f5d96fb8640 -1 AuthRegistry(0x7f5d96fb6ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 21 11:03:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0[79851]: 2026-01-21T16:03:35.997+0000 7f5d94d2d640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 21 11:03:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0[79851]: 2026-01-21T16:03:35.997+0000 7f5d96fb8640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 21 11:03:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0[79851]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 21 11:03:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0[79851]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 21 11:03:36 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a1ff1827b1beccbb993ab9a219f4b0e80db9e50cdf08eea8bc5e58dc8718c202-merged.mount: Deactivated successfully.
Jan 21 11:03:36 np0005590810 podman[79746]: 2026-01-21 16:03:36.030471975 +0000 UTC m=+0.690600637 container remove 33ac7bb108fca82cc4f61abaf09e257f60093cbdcf89b3cf2de94daca427c038 (image=quay.io/ceph/ceph:v19, name=laughing_snyder, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:36 np0005590810 systemd[1]: libpod-conmon-33ac7bb108fca82cc4f61abaf09e257f60093cbdcf89b3cf2de94daca427c038.scope: Deactivated successfully.
Jan 21 11:03:36 np0005590810 python3[79982]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:03:36 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 1 completed events
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:03:36 np0005590810 podman[80016]: 2026-01-21 16:03:36.517053973 +0000 UTC m=+0.049971513 container create 67438f28abee91adbf9c8c3b7707da66d51c2cf714c9b3f890113d787f9e0c0e (image=quay.io/ceph/ceph:v19, name=keen_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:36 np0005590810 systemd[1]: Started libpod-conmon-67438f28abee91adbf9c8c3b7707da66d51c2cf714c9b3f890113d787f9e0c0e.scope.
Jan 21 11:03:36 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3f5ad351ef191d56957e5e4fbbfec31eec506b74fe277140566160668744639/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3f5ad351ef191d56957e5e4fbbfec31eec506b74fe277140566160668744639/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3f5ad351ef191d56957e5e4fbbfec31eec506b74fe277140566160668744639/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:36 np0005590810 podman[80016]: 2026-01-21 16:03:36.493798181 +0000 UTC m=+0.026715771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:36 np0005590810 podman[80016]: 2026-01-21 16:03:36.595575189 +0000 UTC m=+0.128492749 container init 67438f28abee91adbf9c8c3b7707da66d51c2cf714c9b3f890113d787f9e0c0e (image=quay.io/ceph/ceph:v19, name=keen_maxwell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:36 np0005590810 podman[80016]: 2026-01-21 16:03:36.603013649 +0000 UTC m=+0.135931189 container start 67438f28abee91adbf9c8c3b7707da66d51c2cf714c9b3f890113d787f9e0c0e (image=quay.io/ceph/ceph:v19, name=keen_maxwell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:03:36 np0005590810 podman[80016]: 2026-01-21 16:03:36.606331832 +0000 UTC m=+0.139249392 container attach 67438f28abee91adbf9c8c3b7707da66d51c2cf714c9b3f890113d787f9e0c0e (image=quay.io/ceph/ceph:v19, name=keen_maxwell, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 21 11:03:36 np0005590810 podman[80071]: 2026-01-21 16:03:36.683120394 +0000 UTC m=+0.059519898 container exec 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 11:03:36 np0005590810 podman[80071]: 2026-01-21 16:03:36.776024397 +0000 UTC m=+0.152423881 container exec_died 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3498175027' entity='client.admin' 
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:36 np0005590810 systemd[1]: libpod-67438f28abee91adbf9c8c3b7707da66d51c2cf714c9b3f890113d787f9e0c0e.scope: Deactivated successfully.
Jan 21 11:03:36 np0005590810 podman[80016]: 2026-01-21 16:03:36.975108514 +0000 UTC m=+0.508026054 container died 67438f28abee91adbf9c8c3b7707da66d51c2cf714c9b3f890113d787f9e0c0e (image=quay.io/ceph/ceph:v19, name=keen_maxwell, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:03:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 systemd[1]: var-lib-containers-storage-overlay-f3f5ad351ef191d56957e5e4fbbfec31eec506b74fe277140566160668744639-merged.mount: Deactivated successfully.
Jan 21 11:03:37 np0005590810 podman[80016]: 2026-01-21 16:03:37.025049173 +0000 UTC m=+0.557966713 container remove 67438f28abee91adbf9c8c3b7707da66d51c2cf714c9b3f890113d787f9e0c0e (image=quay.io/ceph/ceph:v19, name=keen_maxwell, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:37 np0005590810 systemd[1]: libpod-conmon-67438f28abee91adbf9c8c3b7707da66d51c2cf714c9b3f890113d787f9e0c0e.scope: Deactivated successfully.
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 21 11:03:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:03:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 11:03:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 11:03:37 np0005590810 python3[80268]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:03:37 np0005590810 podman[80269]: 2026-01-21 16:03:37.367735405 +0000 UTC m=+0.039156405 container create 12fe86552cd38f173d3127fb5186b389338a86cd87ad1a038cb7668f9b7bff83 (image=quay.io/ceph/ceph:v19, name=nostalgic_chandrasekhar, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:37 np0005590810 systemd[1]: Started libpod-conmon-12fe86552cd38f173d3127fb5186b389338a86cd87ad1a038cb7668f9b7bff83.scope.
Jan 21 11:03:37 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5d3114814466ba35f95fdf48186ee21705ee3138e4cacb379b7f393a47e202/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5d3114814466ba35f95fdf48186ee21705ee3138e4cacb379b7f393a47e202/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5d3114814466ba35f95fdf48186ee21705ee3138e4cacb379b7f393a47e202/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:37 np0005590810 podman[80269]: 2026-01-21 16:03:37.349731116 +0000 UTC m=+0.021152146 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:37 np0005590810 podman[80269]: 2026-01-21 16:03:37.447076466 +0000 UTC m=+0.118497486 container init 12fe86552cd38f173d3127fb5186b389338a86cd87ad1a038cb7668f9b7bff83 (image=quay.io/ceph/ceph:v19, name=nostalgic_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:03:37 np0005590810 podman[80269]: 2026-01-21 16:03:37.452374151 +0000 UTC m=+0.123795151 container start 12fe86552cd38f173d3127fb5186b389338a86cd87ad1a038cb7668f9b7bff83 (image=quay.io/ceph/ceph:v19, name=nostalgic_chandrasekhar, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 21 11:03:37 np0005590810 podman[80269]: 2026-01-21 16:03:37.455730575 +0000 UTC m=+0.127151575 container attach 12fe86552cd38f173d3127fb5186b389338a86cd87ad1a038cb7668f9b7bff83 (image=quay.io/ceph/ceph:v19, name=nostalgic_chandrasekhar, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:37 np0005590810 ansible-async_wrapper.py[78552]: Done in kid B.
Jan 21 11:03:37 np0005590810 podman[80303]: 2026-01-21 16:03:37.490279657 +0000 UTC m=+0.036795873 container create ddde57fbfef3e9e640fe9dc1624345b99d634bf5cbdfc0010ab411c3cb820ead (image=quay.io/ceph/ceph:v19, name=condescending_brahmagupta, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:37 np0005590810 systemd[1]: Started libpod-conmon-ddde57fbfef3e9e640fe9dc1624345b99d634bf5cbdfc0010ab411c3cb820ead.scope.
Jan 21 11:03:37 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:37 np0005590810 podman[80303]: 2026-01-21 16:03:37.472935419 +0000 UTC m=+0.019451655 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:37 np0005590810 podman[80303]: 2026-01-21 16:03:37.570390302 +0000 UTC m=+0.116906538 container init ddde57fbfef3e9e640fe9dc1624345b99d634bf5cbdfc0010ab411c3cb820ead (image=quay.io/ceph/ceph:v19, name=condescending_brahmagupta, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:03:37 np0005590810 podman[80303]: 2026-01-21 16:03:37.576022967 +0000 UTC m=+0.122539183 container start ddde57fbfef3e9e640fe9dc1624345b99d634bf5cbdfc0010ab411c3cb820ead (image=quay.io/ceph/ceph:v19, name=condescending_brahmagupta, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 11:03:37 np0005590810 condescending_brahmagupta[80322]: 167 167
Jan 21 11:03:37 np0005590810 systemd[1]: libpod-ddde57fbfef3e9e640fe9dc1624345b99d634bf5cbdfc0010ab411c3cb820ead.scope: Deactivated successfully.
Jan 21 11:03:37 np0005590810 podman[80303]: 2026-01-21 16:03:37.579403012 +0000 UTC m=+0.125919428 container attach ddde57fbfef3e9e640fe9dc1624345b99d634bf5cbdfc0010ab411c3cb820ead (image=quay.io/ceph/ceph:v19, name=condescending_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:37 np0005590810 podman[80303]: 2026-01-21 16:03:37.580436995 +0000 UTC m=+0.126953211 container died ddde57fbfef3e9e640fe9dc1624345b99d634bf5cbdfc0010ab411c3cb820ead (image=quay.io/ceph/ceph:v19, name=condescending_brahmagupta, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:03:37 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d1503910b655118fa21f97ce554ed848ced497ef1cce955cdc86f8a86f3bd376-merged.mount: Deactivated successfully.
Jan 21 11:03:37 np0005590810 podman[80303]: 2026-01-21 16:03:37.619268909 +0000 UTC m=+0.165785125 container remove ddde57fbfef3e9e640fe9dc1624345b99d634bf5cbdfc0010ab411c3cb820ead (image=quay.io/ceph/ceph:v19, name=condescending_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Jan 21 11:03:37 np0005590810 systemd[1]: libpod-conmon-ddde57fbfef3e9e640fe9dc1624345b99d634bf5cbdfc0010ab411c3cb820ead.scope: Deactivated successfully.
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ygffhs (unknown last config time)...
Jan 21 11:03:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ygffhs (unknown last config time)...
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ygffhs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ygffhs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:03:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ygffhs on compute-0
Jan 21 11:03:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ygffhs on compute-0
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1763332735' entity='client.admin' 
Jan 21 11:03:37 np0005590810 systemd[1]: libpod-12fe86552cd38f173d3127fb5186b389338a86cd87ad1a038cb7668f9b7bff83.scope: Deactivated successfully.
Jan 21 11:03:37 np0005590810 podman[80269]: 2026-01-21 16:03:37.83107706 +0000 UTC m=+0.502498060 container died 12fe86552cd38f173d3127fb5186b389338a86cd87ad1a038cb7668f9b7bff83 (image=quay.io/ceph/ceph:v19, name=nostalgic_chandrasekhar, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 21 11:03:37 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8f5d3114814466ba35f95fdf48186ee21705ee3138e4cacb379b7f393a47e202-merged.mount: Deactivated successfully.
Jan 21 11:03:37 np0005590810 podman[80269]: 2026-01-21 16:03:37.878514342 +0000 UTC m=+0.549935342 container remove 12fe86552cd38f173d3127fb5186b389338a86cd87ad1a038cb7668f9b7bff83 (image=quay.io/ceph/ceph:v19, name=nostalgic_chandrasekhar, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 21 11:03:37 np0005590810 systemd[1]: libpod-conmon-12fe86552cd38f173d3127fb5186b389338a86cd87ad1a038cb7668f9b7bff83.scope: Deactivated successfully.
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3498175027' entity='client.admin' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ygffhs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 11:03:37 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/1763332735' entity='client.admin' 
Jan 21 11:03:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:03:38 np0005590810 podman[80460]: 2026-01-21 16:03:38.123622787 +0000 UTC m=+0.043901543 container create b0a99ab3aba936d933d0305d830f91e3cac1b3f667ffc7dd3735a3664a032317 (image=quay.io/ceph/ceph:v19, name=nervous_ride, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:03:38 np0005590810 systemd[1]: Started libpod-conmon-b0a99ab3aba936d933d0305d830f91e3cac1b3f667ffc7dd3735a3664a032317.scope.
Jan 21 11:03:38 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:38 np0005590810 podman[80460]: 2026-01-21 16:03:38.192412781 +0000 UTC m=+0.112691597 container init b0a99ab3aba936d933d0305d830f91e3cac1b3f667ffc7dd3735a3664a032317 (image=quay.io/ceph/ceph:v19, name=nervous_ride, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:38 np0005590810 podman[80460]: 2026-01-21 16:03:38.104293778 +0000 UTC m=+0.024572534 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:38 np0005590810 podman[80460]: 2026-01-21 16:03:38.200490702 +0000 UTC m=+0.120769458 container start b0a99ab3aba936d933d0305d830f91e3cac1b3f667ffc7dd3735a3664a032317 (image=quay.io/ceph/ceph:v19, name=nervous_ride, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:38 np0005590810 podman[80460]: 2026-01-21 16:03:38.203725772 +0000 UTC m=+0.124004608 container attach b0a99ab3aba936d933d0305d830f91e3cac1b3f667ffc7dd3735a3664a032317 (image=quay.io/ceph/ceph:v19, name=nervous_ride, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:03:38 np0005590810 nervous_ride[80478]: 167 167
Jan 21 11:03:38 np0005590810 systemd[1]: libpod-b0a99ab3aba936d933d0305d830f91e3cac1b3f667ffc7dd3735a3664a032317.scope: Deactivated successfully.
Jan 21 11:03:38 np0005590810 podman[80460]: 2026-01-21 16:03:38.205666272 +0000 UTC m=+0.125945028 container died b0a99ab3aba936d933d0305d830f91e3cac1b3f667ffc7dd3735a3664a032317 (image=quay.io/ceph/ceph:v19, name=nervous_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:38 np0005590810 podman[80460]: 2026-01-21 16:03:38.243079793 +0000 UTC m=+0.163358549 container remove b0a99ab3aba936d933d0305d830f91e3cac1b3f667ffc7dd3735a3664a032317 (image=quay.io/ceph/ceph:v19, name=nervous_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:38 np0005590810 python3[80468]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:03:38 np0005590810 systemd[1]: libpod-conmon-b0a99ab3aba936d933d0305d830f91e3cac1b3f667ffc7dd3735a3664a032317.scope: Deactivated successfully.
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:03:38 np0005590810 podman[80495]: 2026-01-21 16:03:38.312406744 +0000 UTC m=+0.048349110 container create 4a5035c9d7aa067d91689407f54c61bd8d795eb1d816795ad101e1a886cce9ce (image=quay.io/ceph/ceph:v19, name=youthful_saha, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:38 np0005590810 systemd[1]: Started libpod-conmon-4a5035c9d7aa067d91689407f54c61bd8d795eb1d816795ad101e1a886cce9ce.scope.
Jan 21 11:03:38 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:38 np0005590810 systemd[1]: var-lib-containers-storage-overlay-85e40617919065423bf8002e9b0ea17bf51265fa8944655fb41e73fc5fc7ffb5-merged.mount: Deactivated successfully.
Jan 21 11:03:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8916c8c8514e20bbeaeb28820b396b4581331faeefb9b53b8233eaae64fb4b5d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8916c8c8514e20bbeaeb28820b396b4581331faeefb9b53b8233eaae64fb4b5d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8916c8c8514e20bbeaeb28820b396b4581331faeefb9b53b8233eaae64fb4b5d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:38 np0005590810 podman[80495]: 2026-01-21 16:03:38.290890827 +0000 UTC m=+0.026833223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:38 np0005590810 podman[80495]: 2026-01-21 16:03:38.389027372 +0000 UTC m=+0.124969758 container init 4a5035c9d7aa067d91689407f54c61bd8d795eb1d816795ad101e1a886cce9ce (image=quay.io/ceph/ceph:v19, name=youthful_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 21 11:03:38 np0005590810 podman[80495]: 2026-01-21 16:03:38.396675259 +0000 UTC m=+0.132617625 container start 4a5035c9d7aa067d91689407f54c61bd8d795eb1d816795ad101e1a886cce9ce (image=quay.io/ceph/ceph:v19, name=youthful_saha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 11:03:38 np0005590810 podman[80495]: 2026-01-21 16:03:38.40057412 +0000 UTC m=+0.136516506 container attach 4a5035c9d7aa067d91689407f54c61bd8d795eb1d816795ad101e1a886cce9ce (image=quay.io/ceph/ceph:v19, name=youthful_saha, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1613414636' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: Reconfiguring mgr.compute-0.ygffhs (unknown last config time)...
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: Reconfiguring daemon mgr.compute-0.ygffhs on compute-0
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:38 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/1613414636' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 21 11:03:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 21 11:03:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 11:03:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1613414636' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 21 11:03:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 21 11:03:39 np0005590810 youthful_saha[80523]: set require_min_compat_client to mimic
Jan 21 11:03:39 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 21 11:03:39 np0005590810 systemd[1]: libpod-4a5035c9d7aa067d91689407f54c61bd8d795eb1d816795ad101e1a886cce9ce.scope: Deactivated successfully.
Jan 21 11:03:39 np0005590810 podman[80495]: 2026-01-21 16:03:39.346178228 +0000 UTC m=+1.082120584 container died 4a5035c9d7aa067d91689407f54c61bd8d795eb1d816795ad101e1a886cce9ce (image=quay.io/ceph/ceph:v19, name=youthful_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 11:03:39 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8916c8c8514e20bbeaeb28820b396b4581331faeefb9b53b8233eaae64fb4b5d-merged.mount: Deactivated successfully.
Jan 21 11:03:39 np0005590810 podman[80495]: 2026-01-21 16:03:39.381084172 +0000 UTC m=+1.117026548 container remove 4a5035c9d7aa067d91689407f54c61bd8d795eb1d816795ad101e1a886cce9ce (image=quay.io/ceph/ceph:v19, name=youthful_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:39 np0005590810 systemd[1]: libpod-conmon-4a5035c9d7aa067d91689407f54c61bd8d795eb1d816795ad101e1a886cce9ce.scope: Deactivated successfully.
Jan 21 11:03:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:03:40 np0005590810 python3[80596]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:03:40 np0005590810 podman[80597]: 2026-01-21 16:03:40.064749842 +0000 UTC m=+0.036880165 container create 6eb2f6eeefc34e5740657c867b07dbcdd9e229e6de4d3eb49c64f21ca1b3be04 (image=quay.io/ceph/ceph:v19, name=vigorous_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:03:40 np0005590810 systemd[1]: Started libpod-conmon-6eb2f6eeefc34e5740657c867b07dbcdd9e229e6de4d3eb49c64f21ca1b3be04.scope.
Jan 21 11:03:40 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:40 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97ac9c972e04e0c2df02ceabb4bed62da1c98c0ff16934f3542d49b5ac20b28/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:40 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97ac9c972e04e0c2df02ceabb4bed62da1c98c0ff16934f3542d49b5ac20b28/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:40 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97ac9c972e04e0c2df02ceabb4bed62da1c98c0ff16934f3542d49b5ac20b28/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:40 np0005590810 podman[80597]: 2026-01-21 16:03:40.136488868 +0000 UTC m=+0.108619211 container init 6eb2f6eeefc34e5740657c867b07dbcdd9e229e6de4d3eb49c64f21ca1b3be04 (image=quay.io/ceph/ceph:v19, name=vigorous_kalam, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:40 np0005590810 podman[80597]: 2026-01-21 16:03:40.141620137 +0000 UTC m=+0.113750460 container start 6eb2f6eeefc34e5740657c867b07dbcdd9e229e6de4d3eb49c64f21ca1b3be04 (image=quay.io/ceph/ceph:v19, name=vigorous_kalam, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:03:40 np0005590810 podman[80597]: 2026-01-21 16:03:40.144197787 +0000 UTC m=+0.116328130 container attach 6eb2f6eeefc34e5740657c867b07dbcdd9e229e6de4d3eb49c64f21ca1b3be04 (image=quay.io/ceph/ceph:v19, name=vigorous_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:03:40 np0005590810 podman[80597]: 2026-01-21 16:03:40.049345955 +0000 UTC m=+0.021476298 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/1613414636' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 21 11:03:40 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14170 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:40 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Added host compute-0
Jan 21 11:03:40 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:03:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:41 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:41 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:41 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:41 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:41 np0005590810 ceph-mon[74380]: Added host compute-0
Jan 21 11:03:41 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:03:41 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:03:42 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 21 11:03:42 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 21 11:03:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:03:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:03:43 np0005590810 ceph-mon[74380]: Deploying cephadm binary to compute-1
Jan 21 11:03:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:03:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 11:03:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:46 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Added host compute-1
Jan 21 11:03:46 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 21 11:03:46 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:03:46 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:03:46 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:03:46 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:03:46 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:03:46 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:03:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:03:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:03:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:47 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:47 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:47 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 21 11:03:47 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 21 11:03:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:03:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:03:48 np0005590810 ceph-mon[74380]: Added host compute-1
Jan 21 11:03:48 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:48 np0005590810 ceph-mon[74380]: Deploying cephadm binary to compute-2
Jan 21 11:03:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:03:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:49 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:03:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 11:03:50 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:50 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Added host compute-2
Jan 21 11:03:50 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 21 11:03:50 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 21 11:03:50 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 21 11:03:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 11:03:50 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:50 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 21 11:03:50 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:51 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 21 11:03:51 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 21 11:03:51 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 21 11:03:51 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 21 11:03:51 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 21 11:03:51 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:51 np0005590810 vigorous_kalam[80613]: Added host 'compute-0' with addr '192.168.122.100'
Jan 21 11:03:51 np0005590810 vigorous_kalam[80613]: Added host 'compute-1' with addr '192.168.122.101'
Jan 21 11:03:51 np0005590810 vigorous_kalam[80613]: Added host 'compute-2' with addr '192.168.122.102'
Jan 21 11:03:51 np0005590810 vigorous_kalam[80613]: Scheduled mon update...
Jan 21 11:03:51 np0005590810 vigorous_kalam[80613]: Scheduled mgr update...
Jan 21 11:03:51 np0005590810 vigorous_kalam[80613]: Scheduled osd.default_drive_group update...
Jan 21 11:03:51 np0005590810 systemd[1]: libpod-6eb2f6eeefc34e5740657c867b07dbcdd9e229e6de4d3eb49c64f21ca1b3be04.scope: Deactivated successfully.
Jan 21 11:03:51 np0005590810 podman[80597]: 2026-01-21 16:03:51.033665042 +0000 UTC m=+11.005795365 container died 6eb2f6eeefc34e5740657c867b07dbcdd9e229e6de4d3eb49c64f21ca1b3be04 (image=quay.io/ceph/ceph:v19, name=vigorous_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:03:51 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a97ac9c972e04e0c2df02ceabb4bed62da1c98c0ff16934f3542d49b5ac20b28-merged.mount: Deactivated successfully.
Jan 21 11:03:51 np0005590810 podman[80597]: 2026-01-21 16:03:51.073564721 +0000 UTC m=+11.045695044 container remove 6eb2f6eeefc34e5740657c867b07dbcdd9e229e6de4d3eb49c64f21ca1b3be04 (image=quay.io/ceph/ceph:v19, name=vigorous_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 21 11:03:51 np0005590810 systemd[1]: libpod-conmon-6eb2f6eeefc34e5740657c867b07dbcdd9e229e6de4d3eb49c64f21ca1b3be04.scope: Deactivated successfully.
Jan 21 11:03:51 np0005590810 python3[80769]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:03:51 np0005590810 podman[80771]: 2026-01-21 16:03:51.583815542 +0000 UTC m=+0.044991457 container create 695b39712245756d1a0b4549ef5393c8baaaeacc6c14a5820c6843786a426f4a (image=quay.io/ceph/ceph:v19, name=upbeat_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 21 11:03:51 np0005590810 systemd[1]: Started libpod-conmon-695b39712245756d1a0b4549ef5393c8baaaeacc6c14a5820c6843786a426f4a.scope.
Jan 21 11:03:51 np0005590810 podman[80771]: 2026-01-21 16:03:51.559461776 +0000 UTC m=+0.020637701 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:03:51 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:03:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b5e4477c6ab55bc8c68a0d50b31ee546a96cfc5fe17aa4cb01979e5b2a037f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b5e4477c6ab55bc8c68a0d50b31ee546a96cfc5fe17aa4cb01979e5b2a037f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b5e4477c6ab55bc8c68a0d50b31ee546a96cfc5fe17aa4cb01979e5b2a037f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:03:51 np0005590810 podman[80771]: 2026-01-21 16:03:51.692026099 +0000 UTC m=+0.153202014 container init 695b39712245756d1a0b4549ef5393c8baaaeacc6c14a5820c6843786a426f4a (image=quay.io/ceph/ceph:v19, name=upbeat_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 11:03:51 np0005590810 podman[80771]: 2026-01-21 16:03:51.699360936 +0000 UTC m=+0.160536841 container start 695b39712245756d1a0b4549ef5393c8baaaeacc6c14a5820c6843786a426f4a (image=quay.io/ceph/ceph:v19, name=upbeat_rosalind, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:03:51 np0005590810 podman[80771]: 2026-01-21 16:03:51.70238497 +0000 UTC m=+0.163560895 container attach 695b39712245756d1a0b4549ef5393c8baaaeacc6c14a5820c6843786a426f4a (image=quay.io/ceph/ceph:v19, name=upbeat_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 21 11:03:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: Added host compute-2
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 21 11:03:51 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:03:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 11:03:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145700396' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 11:03:52 np0005590810 upbeat_rosalind[80787]: 
Jan 21 11:03:52 np0005590810 upbeat_rosalind[80787]: {"fsid":"d9745984-fea8-5195-8ec5-61f685b5c785","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":60,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-21T16:02:49:724869+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-21T16:02:49.728816+0000","services":{}},"progress_events":{}}
Jan 21 11:03:52 np0005590810 systemd[1]: libpod-695b39712245756d1a0b4549ef5393c8baaaeacc6c14a5820c6843786a426f4a.scope: Deactivated successfully.
Jan 21 11:03:52 np0005590810 podman[80771]: 2026-01-21 16:03:52.161616458 +0000 UTC m=+0.622792363 container died 695b39712245756d1a0b4549ef5393c8baaaeacc6c14a5820c6843786a426f4a (image=quay.io/ceph/ceph:v19, name=upbeat_rosalind, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Jan 21 11:03:52 np0005590810 systemd[1]: var-lib-containers-storage-overlay-06b5e4477c6ab55bc8c68a0d50b31ee546a96cfc5fe17aa4cb01979e5b2a037f-merged.mount: Deactivated successfully.
Jan 21 11:03:52 np0005590810 podman[80771]: 2026-01-21 16:03:52.198283386 +0000 UTC m=+0.659459291 container remove 695b39712245756d1a0b4549ef5393c8baaaeacc6c14a5820c6843786a426f4a (image=quay.io/ceph/ceph:v19, name=upbeat_rosalind, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:03:52 np0005590810 systemd[1]: libpod-conmon-695b39712245756d1a0b4549ef5393c8baaaeacc6c14a5820c6843786a426f4a.scope: Deactivated successfully.
Jan 21 11:03:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:03:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:03:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:03:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:03:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:03:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:04:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:04:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:04:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:04:15
Jan 21 11:04:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:04:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:04:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] No pools available
Jan 21 11:04:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:04:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:04:16 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:04:16 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:04:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:04:16 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:04:16 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:04:16 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:04:16 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:04:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:04:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:04:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:04:21 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 21 11:04:21 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 21 11:04:21 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:04:21 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:04:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:22 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:22 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:22 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:22 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:22 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 11:04:22 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:04:22 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:04:22 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:04:22 np0005590810 python3[80851]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:04:22 np0005590810 podman[80853]: 2026-01-21 16:04:22.476697479 +0000 UTC m=+0.038985446 container create b6836f04a0374da68c72e001eccdaba7cfe5bbe6ac6ebadb6656133ac9843208 (image=quay.io/ceph/ceph:v19, name=hungry_solomon, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:04:22 np0005590810 systemd[1]: Started libpod-conmon-b6836f04a0374da68c72e001eccdaba7cfe5bbe6ac6ebadb6656133ac9843208.scope.
Jan 21 11:04:22 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:22 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a96802d5172db963a22a8cf6b83271a1eaa9b4a390d8c68a34562f031d209ec/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:22 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a96802d5172db963a22a8cf6b83271a1eaa9b4a390d8c68a34562f031d209ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:22 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a96802d5172db963a22a8cf6b83271a1eaa9b4a390d8c68a34562f031d209ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:22 np0005590810 podman[80853]: 2026-01-21 16:04:22.533890482 +0000 UTC m=+0.096178449 container init b6836f04a0374da68c72e001eccdaba7cfe5bbe6ac6ebadb6656133ac9843208 (image=quay.io/ceph/ceph:v19, name=hungry_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:04:22 np0005590810 podman[80853]: 2026-01-21 16:04:22.538056868 +0000 UTC m=+0.100344845 container start b6836f04a0374da68c72e001eccdaba7cfe5bbe6ac6ebadb6656133ac9843208 (image=quay.io/ceph/ceph:v19, name=hungry_solomon, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 21 11:04:22 np0005590810 podman[80853]: 2026-01-21 16:04:22.540712495 +0000 UTC m=+0.103000472 container attach b6836f04a0374da68c72e001eccdaba7cfe5bbe6ac6ebadb6656133ac9843208 (image=quay.io/ceph/ceph:v19, name=hungry_solomon, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 11:04:22 np0005590810 podman[80853]: 2026-01-21 16:04:22.458468269 +0000 UTC m=+0.020756266 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:04:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:04:22 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:04:22 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:04:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 11:04:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778081490' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 11:04:22 np0005590810 hungry_solomon[80869]: 
Jan 21 11:04:22 np0005590810 hungry_solomon[80869]: {"fsid":"d9745984-fea8-5195-8ec5-61f685b5c785","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":91,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-21T16:02:49:724869+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-21T16:04:17.975788+0000","services":{}},"progress_events":{}}
Jan 21 11:04:22 np0005590810 systemd[1]: libpod-b6836f04a0374da68c72e001eccdaba7cfe5bbe6ac6ebadb6656133ac9843208.scope: Deactivated successfully.
Jan 21 11:04:22 np0005590810 podman[80853]: 2026-01-21 16:04:22.977631609 +0000 UTC m=+0.539919586 container died b6836f04a0374da68c72e001eccdaba7cfe5bbe6ac6ebadb6656133ac9843208 (image=quay.io/ceph/ceph:v19, name=hungry_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:04:22 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8a96802d5172db963a22a8cf6b83271a1eaa9b4a390d8c68a34562f031d209ec-merged.mount: Deactivated successfully.
Jan 21 11:04:23 np0005590810 podman[80853]: 2026-01-21 16:04:23.010926601 +0000 UTC m=+0.573214578 container remove b6836f04a0374da68c72e001eccdaba7cfe5bbe6ac6ebadb6656133ac9843208 (image=quay.io/ceph/ceph:v19, name=hungry_solomon, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:04:23 np0005590810 systemd[1]: libpod-conmon-b6836f04a0374da68c72e001eccdaba7cfe5bbe6ac6ebadb6656133ac9843208.scope: Deactivated successfully.
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: Updating compute-1:/etc/ceph/ceph.conf
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:23 np0005590810 ceph-mgr[74671]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 11:04:23 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 21 11:04:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:23 np0005590810 ceph-mgr[74671]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 21 11:04:23 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 21 11:04:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:23 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev f731e2c7-7400-41ed-ba78-e131c9e63441 (Updating crash deployment (+1 -> 2))
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:04:23.537+0000 7fdaf6eb7640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: service_name: mon
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: placement:
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  hosts:
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  - compute-0
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  - compute-1
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  - compute-2
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:04:23.538+0000 7fdaf6eb7640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: service_name: mgr
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: placement:
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  hosts:
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  - compute-0
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  - compute-1
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  - compute-2
Jan 21 11:04:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:04:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:04:23 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 21 11:04:23 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 21 11:04:24 np0005590810 ceph-mon[74380]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:04:24 np0005590810 ceph-mon[74380]: Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:04:24 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:24 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:24 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:24 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 11:04:24 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 11:04:24 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 21 11:04:25 np0005590810 ceph-mon[74380]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 21 11:04:25 np0005590810 ceph-mon[74380]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 21 11:04:25 np0005590810 ceph-mon[74380]: Deploying daemon crash.compute-1 on compute-1
Jan 21 11:04:25 np0005590810 ceph-mon[74380]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 21 11:04:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:26 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev f731e2c7-7400-41ed-ba78-e131c9e63441 (Updating crash deployment (+1 -> 2))
Jan 21 11:04:26 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event f731e2c7-7400-41ed-ba78-e131c9e63441 (Updating crash deployment (+1 -> 2)) in 3 seconds
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:04:26 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 2 completed events
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:04:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:26 np0005590810 podman[80994]: 2026-01-21 16:04:26.849634108 +0000 UTC m=+0.025852381 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:04:26 np0005590810 podman[80994]: 2026-01-21 16:04:26.956621745 +0000 UTC m=+0.132839998 container create 61a4183ef0e721dbf6da9bf1e89870085f9fbd6ce0d06fa6f17b59156493738b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:04:27 np0005590810 systemd[1]: Started libpod-conmon-61a4183ef0e721dbf6da9bf1e89870085f9fbd6ce0d06fa6f17b59156493738b.scope.
Jan 21 11:04:27 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:27 np0005590810 podman[80994]: 2026-01-21 16:04:27.04594837 +0000 UTC m=+0.222166643 container init 61a4183ef0e721dbf6da9bf1e89870085f9fbd6ce0d06fa6f17b59156493738b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:04:27 np0005590810 podman[80994]: 2026-01-21 16:04:27.052149523 +0000 UTC m=+0.228367776 container start 61a4183ef0e721dbf6da9bf1e89870085f9fbd6ce0d06fa6f17b59156493738b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:04:27 np0005590810 podman[80994]: 2026-01-21 16:04:27.055859439 +0000 UTC m=+0.232077692 container attach 61a4183ef0e721dbf6da9bf1e89870085f9fbd6ce0d06fa6f17b59156493738b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_taussig, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:04:27 np0005590810 nice_taussig[81010]: 167 167
Jan 21 11:04:27 np0005590810 systemd[1]: libpod-61a4183ef0e721dbf6da9bf1e89870085f9fbd6ce0d06fa6f17b59156493738b.scope: Deactivated successfully.
Jan 21 11:04:27 np0005590810 podman[80994]: 2026-01-21 16:04:27.057933357 +0000 UTC m=+0.234151630 container died 61a4183ef0e721dbf6da9bf1e89870085f9fbd6ce0d06fa6f17b59156493738b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 11:04:27 np0005590810 systemd[1]: var-lib-containers-storage-overlay-317440fce37161602485573ee0d41ca1e7e2e3d78c8e9e72a22081857fabd8c2-merged.mount: Deactivated successfully.
Jan 21 11:04:27 np0005590810 podman[80994]: 2026-01-21 16:04:27.095987043 +0000 UTC m=+0.272205296 container remove 61a4183ef0e721dbf6da9bf1e89870085f9fbd6ce0d06fa6f17b59156493738b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 11:04:27 np0005590810 systemd[1]: libpod-conmon-61a4183ef0e721dbf6da9bf1e89870085f9fbd6ce0d06fa6f17b59156493738b.scope: Deactivated successfully.
Jan 21 11:04:27 np0005590810 podman[81034]: 2026-01-21 16:04:27.274002717 +0000 UTC m=+0.060488311 container create ad864356a2be5c1ee8c6d02d7ac7ca6e20ddbaa84b5bfb4f2fce57e07803cd65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 21 11:04:27 np0005590810 systemd[1]: Started libpod-conmon-ad864356a2be5c1ee8c6d02d7ac7ca6e20ddbaa84b5bfb4f2fce57e07803cd65.scope.
Jan 21 11:04:27 np0005590810 podman[81034]: 2026-01-21 16:04:27.243687992 +0000 UTC m=+0.030173676 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:04:27 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4de39b2df81e8c9b96a1338348d328744d3f3f3924235fa6a318a9dc043af5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4de39b2df81e8c9b96a1338348d328744d3f3f3924235fa6a318a9dc043af5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4de39b2df81e8c9b96a1338348d328744d3f3f3924235fa6a318a9dc043af5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4de39b2df81e8c9b96a1338348d328744d3f3f3924235fa6a318a9dc043af5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4de39b2df81e8c9b96a1338348d328744d3f3f3924235fa6a318a9dc043af5c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:27 np0005590810 podman[81034]: 2026-01-21 16:04:27.363576003 +0000 UTC m=+0.150061647 container init ad864356a2be5c1ee8c6d02d7ac7ca6e20ddbaa84b5bfb4f2fce57e07803cd65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jones, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:04:27 np0005590810 podman[81034]: 2026-01-21 16:04:27.372037603 +0000 UTC m=+0.158523197 container start ad864356a2be5c1ee8c6d02d7ac7ca6e20ddbaa84b5bfb4f2fce57e07803cd65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:04:27 np0005590810 podman[81034]: 2026-01-21 16:04:27.376400038 +0000 UTC m=+0.162885682 container attach ad864356a2be5c1ee8c6d02d7ac7ca6e20ddbaa84b5bfb4f2fce57e07803cd65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jones, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 11:04:27 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:04:27 np0005590810 musing_jones[81050]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:04:27 np0005590810 musing_jones[81050]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 11:04:27 np0005590810 musing_jones[81050]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 11:04:27 np0005590810 musing_jones[81050]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 63a44247-c214-4217-a027-13e89fae6b3d
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "63a44247-c214-4217-a027-13e89fae6b3d"} v 0)
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2515948861' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "63a44247-c214-4217-a027-13e89fae6b3d"}]: dispatch
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2515948861' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "63a44247-c214-4217-a027-13e89fae6b3d"}]': finished
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:04:28 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 11:04:28 np0005590810 musing_jones[81050]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 21 11:04:28 np0005590810 lvm[81111]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:04:28 np0005590810 lvm[81111]: VG ceph_vg0 finished
Jan 21 11:04:28 np0005590810 musing_jones[81050]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 21 11:04:28 np0005590810 musing_jones[81050]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 11:04:28 np0005590810 musing_jones[81050]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:28 np0005590810 musing_jones[81050]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2515948861' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "63a44247-c214-4217-a027-13e89fae6b3d"}]: dispatch
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2515948861' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "63a44247-c214-4217-a027-13e89fae6b3d"}]': finished
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576415701' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 21 11:04:28 np0005590810 musing_jones[81050]: stderr: got monmap epoch 1
Jan 21 11:04:28 np0005590810 musing_jones[81050]: --> Creating keyring file for osd.0
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "910dca59-f9f1-45ae-8a3f-60c7331732b8"} v 0)
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1473003640' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "910dca59-f9f1-45ae-8a3f-60c7331732b8"}]: dispatch
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 11:04:28 np0005590810 musing_jones[81050]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 21 11:04:28 np0005590810 musing_jones[81050]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 21 11:04:28 np0005590810 musing_jones[81050]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 63a44247-c214-4217-a027-13e89fae6b3d --setuser ceph --setgroup ceph
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1473003640' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "910dca59-f9f1-45ae-8a3f-60c7331732b8"}]': finished
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 11:04:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 11:04:28 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 11:04:28 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 11:04:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 21 11:04:29 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1032341735' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 21 11:04:29 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.101:0/1473003640' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "910dca59-f9f1-45ae-8a3f-60c7331732b8"}]: dispatch
Jan 21 11:04:29 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.101:0/1473003640' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "910dca59-f9f1-45ae-8a3f-60c7331732b8"}]': finished
Jan 21 11:04:29 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 21 11:04:30 np0005590810 ceph-mon[74380]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 21 11:04:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:04:32 np0005590810 musing_jones[81050]: stderr: 2026-01-21T16:04:28.888+0000 7f832c7f5740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 21 11:04:32 np0005590810 musing_jones[81050]: stderr: 2026-01-21T16:04:29.151+0000 7f832c7f5740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 21 11:04:32 np0005590810 musing_jones[81050]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 21 11:04:32 np0005590810 musing_jones[81050]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 11:04:32 np0005590810 musing_jones[81050]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 21 11:04:33 np0005590810 musing_jones[81050]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:33 np0005590810 musing_jones[81050]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:33 np0005590810 musing_jones[81050]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 11:04:33 np0005590810 musing_jones[81050]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 11:04:33 np0005590810 musing_jones[81050]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 21 11:04:33 np0005590810 musing_jones[81050]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 21 11:04:33 np0005590810 systemd[1]: libpod-ad864356a2be5c1ee8c6d02d7ac7ca6e20ddbaa84b5bfb4f2fce57e07803cd65.scope: Deactivated successfully.
Jan 21 11:04:33 np0005590810 systemd[1]: libpod-ad864356a2be5c1ee8c6d02d7ac7ca6e20ddbaa84b5bfb4f2fce57e07803cd65.scope: Consumed 2.010s CPU time.
Jan 21 11:04:33 np0005590810 conmon[81050]: conmon ad864356a2be5c1ee8c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad864356a2be5c1ee8c6d02d7ac7ca6e20ddbaa84b5bfb4f2fce57e07803cd65.scope/container/memory.events
Jan 21 11:04:33 np0005590810 podman[81034]: 2026-01-21 16:04:33.323669936 +0000 UTC m=+6.110155540 container died ad864356a2be5c1ee8c6d02d7ac7ca6e20ddbaa84b5bfb4f2fce57e07803cd65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:04:33 np0005590810 systemd[1]: var-lib-containers-storage-overlay-f4de39b2df81e8c9b96a1338348d328744d3f3f3924235fa6a318a9dc043af5c-merged.mount: Deactivated successfully.
Jan 21 11:04:33 np0005590810 podman[81034]: 2026-01-21 16:04:33.376927144 +0000 UTC m=+6.163412738 container remove ad864356a2be5c1ee8c6d02d7ac7ca6e20ddbaa84b5bfb4f2fce57e07803cd65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jones, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:04:33 np0005590810 systemd[1]: libpod-conmon-ad864356a2be5c1ee8c6d02d7ac7ca6e20ddbaa84b5bfb4f2fce57e07803cd65.scope: Deactivated successfully.
Jan 21 11:04:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:33 np0005590810 podman[82138]: 2026-01-21 16:04:33.86454618 +0000 UTC m=+0.020913887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:04:34 np0005590810 podman[82138]: 2026-01-21 16:04:34.00267062 +0000 UTC m=+0.159038307 container create 4386ad4ac0a68cf6c4920cf838052fb14283389e4bf2d87496d521f0027b5f11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hodgkin, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:04:34 np0005590810 systemd[1]: Started libpod-conmon-4386ad4ac0a68cf6c4920cf838052fb14283389e4bf2d87496d521f0027b5f11.scope.
Jan 21 11:04:34 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:34 np0005590810 podman[82138]: 2026-01-21 16:04:34.087975182 +0000 UTC m=+0.244342879 container init 4386ad4ac0a68cf6c4920cf838052fb14283389e4bf2d87496d521f0027b5f11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hodgkin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:04:34 np0005590810 podman[82138]: 2026-01-21 16:04:34.096496732 +0000 UTC m=+0.252864429 container start 4386ad4ac0a68cf6c4920cf838052fb14283389e4bf2d87496d521f0027b5f11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hodgkin, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 21 11:04:34 np0005590810 podman[82138]: 2026-01-21 16:04:34.100084828 +0000 UTC m=+0.256452515 container attach 4386ad4ac0a68cf6c4920cf838052fb14283389e4bf2d87496d521f0027b5f11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hodgkin, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 11:04:34 np0005590810 pensive_hodgkin[82155]: 167 167
Jan 21 11:04:34 np0005590810 systemd[1]: libpod-4386ad4ac0a68cf6c4920cf838052fb14283389e4bf2d87496d521f0027b5f11.scope: Deactivated successfully.
Jan 21 11:04:34 np0005590810 podman[82138]: 2026-01-21 16:04:34.103002115 +0000 UTC m=+0.259369802 container died 4386ad4ac0a68cf6c4920cf838052fb14283389e4bf2d87496d521f0027b5f11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hodgkin, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:04:34 np0005590810 systemd[1]: var-lib-containers-storage-overlay-93096cb39c44f83ceee17403471263d6c545016acc20620149f9128aa54a327b-merged.mount: Deactivated successfully.
Jan 21 11:04:34 np0005590810 podman[82138]: 2026-01-21 16:04:34.14098933 +0000 UTC m=+0.297357017 container remove 4386ad4ac0a68cf6c4920cf838052fb14283389e4bf2d87496d521f0027b5f11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hodgkin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 21 11:04:34 np0005590810 systemd[1]: libpod-conmon-4386ad4ac0a68cf6c4920cf838052fb14283389e4bf2d87496d521f0027b5f11.scope: Deactivated successfully.
Jan 21 11:04:34 np0005590810 podman[82178]: 2026-01-21 16:04:34.289457419 +0000 UTC m=+0.044399990 container create fc1063a3d71ee90dc46d74d2a32e541832e3468c234571007b0421136d928409 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 21 11:04:34 np0005590810 systemd[1]: Started libpod-conmon-fc1063a3d71ee90dc46d74d2a32e541832e3468c234571007b0421136d928409.scope.
Jan 21 11:04:34 np0005590810 podman[82178]: 2026-01-21 16:04:34.270463541 +0000 UTC m=+0.025406132 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:04:34 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b500715df956e2e39bf1a93842168817ab3ea6ca41be4e286ac1e1ddcd6215/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b500715df956e2e39bf1a93842168817ab3ea6ca41be4e286ac1e1ddcd6215/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b500715df956e2e39bf1a93842168817ab3ea6ca41be4e286ac1e1ddcd6215/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b500715df956e2e39bf1a93842168817ab3ea6ca41be4e286ac1e1ddcd6215/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 21 11:04:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 21 11:04:34 np0005590810 podman[82178]: 2026-01-21 16:04:34.385836597 +0000 UTC m=+0.140779168 container init fc1063a3d71ee90dc46d74d2a32e541832e3468c234571007b0421136d928409 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ellis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:04:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:04:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:04:34 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Jan 21 11:04:34 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Jan 21 11:04:34 np0005590810 podman[82178]: 2026-01-21 16:04:34.400879659 +0000 UTC m=+0.155822230 container start fc1063a3d71ee90dc46d74d2a32e541832e3468c234571007b0421136d928409 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:04:34 np0005590810 podman[82178]: 2026-01-21 16:04:34.403966676 +0000 UTC m=+0.158909257 container attach fc1063a3d71ee90dc46d74d2a32e541832e3468c234571007b0421136d928409 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ellis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:04:34 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]: {
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:    "0": [
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:        {
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            "devices": [
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "/dev/loop3"
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            ],
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            "lv_name": "ceph_lv0",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            "lv_size": "21470642176",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            "name": "ceph_lv0",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            "tags": {
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.cluster_name": "ceph",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.crush_device_class": "",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.encrypted": "0",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.osd_id": "0",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.type": "block",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.vdo": "0",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:                "ceph.with_tpm": "0"
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            },
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            "type": "block",
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:            "vg_name": "ceph_vg0"
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:        }
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]:    ]
Jan 21 11:04:34 np0005590810 friendly_ellis[82194]: }
Jan 21 11:04:34 np0005590810 systemd[1]: libpod-fc1063a3d71ee90dc46d74d2a32e541832e3468c234571007b0421136d928409.scope: Deactivated successfully.
Jan 21 11:04:34 np0005590810 podman[82178]: 2026-01-21 16:04:34.699963043 +0000 UTC m=+0.454905624 container died fc1063a3d71ee90dc46d74d2a32e541832e3468c234571007b0421136d928409 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ellis, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:04:34 np0005590810 systemd[1]: var-lib-containers-storage-overlay-f6b500715df956e2e39bf1a93842168817ab3ea6ca41be4e286ac1e1ddcd6215-merged.mount: Deactivated successfully.
Jan 21 11:04:34 np0005590810 podman[82178]: 2026-01-21 16:04:34.73809375 +0000 UTC m=+0.493036321 container remove fc1063a3d71ee90dc46d74d2a32e541832e3468c234571007b0421136d928409 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ellis, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 11:04:34 np0005590810 systemd[1]: libpod-conmon-fc1063a3d71ee90dc46d74d2a32e541832e3468c234571007b0421136d928409.scope: Deactivated successfully.
Jan 21 11:04:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 21 11:04:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 21 11:04:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:04:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:04:34 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 21 11:04:34 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 21 11:04:35 np0005590810 podman[82305]: 2026-01-21 16:04:35.250843416 +0000 UTC m=+0.032778443 container create 02594b1f828392a5c18bff80da9b6498bb537a6381ee41b06b0e48cbf6707edc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_bose, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:04:35 np0005590810 systemd[1]: Started libpod-conmon-02594b1f828392a5c18bff80da9b6498bb537a6381ee41b06b0e48cbf6707edc.scope.
Jan 21 11:04:35 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:35 np0005590810 podman[82305]: 2026-01-21 16:04:35.314759712 +0000 UTC m=+0.096694769 container init 02594b1f828392a5c18bff80da9b6498bb537a6381ee41b06b0e48cbf6707edc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_bose, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:04:35 np0005590810 podman[82305]: 2026-01-21 16:04:35.320959835 +0000 UTC m=+0.102894872 container start 02594b1f828392a5c18bff80da9b6498bb537a6381ee41b06b0e48cbf6707edc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_bose, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 21 11:04:35 np0005590810 podman[82305]: 2026-01-21 16:04:35.324316481 +0000 UTC m=+0.106251528 container attach 02594b1f828392a5c18bff80da9b6498bb537a6381ee41b06b0e48cbf6707edc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_bose, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:04:35 np0005590810 funny_bose[82321]: 167 167
Jan 21 11:04:35 np0005590810 systemd[1]: libpod-02594b1f828392a5c18bff80da9b6498bb537a6381ee41b06b0e48cbf6707edc.scope: Deactivated successfully.
Jan 21 11:04:35 np0005590810 podman[82305]: 2026-01-21 16:04:35.326291209 +0000 UTC m=+0.108226256 container died 02594b1f828392a5c18bff80da9b6498bb537a6381ee41b06b0e48cbf6707edc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:04:35 np0005590810 podman[82305]: 2026-01-21 16:04:35.237008982 +0000 UTC m=+0.018944049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:04:35 np0005590810 systemd[1]: var-lib-containers-storage-overlay-32142294eda44e8a69d1c4e1b64cf3056ed27027cc5ddde9462dfbc46f317df3-merged.mount: Deactivated successfully.
Jan 21 11:04:35 np0005590810 podman[82305]: 2026-01-21 16:04:35.362571957 +0000 UTC m=+0.144506994 container remove 02594b1f828392a5c18bff80da9b6498bb537a6381ee41b06b0e48cbf6707edc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:04:35 np0005590810 systemd[1]: libpod-conmon-02594b1f828392a5c18bff80da9b6498bb537a6381ee41b06b0e48cbf6707edc.scope: Deactivated successfully.
Jan 21 11:04:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:35 np0005590810 podman[82349]: 2026-01-21 16:04:35.580507815 +0000 UTC m=+0.034208632 container create 70cd965b45540015716806bf0b2754ac7cddff6c13015a4698c1ca5264304b40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:04:35 np0005590810 ceph-mon[74380]: Deploying daemon osd.1 on compute-1
Jan 21 11:04:35 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 21 11:04:35 np0005590810 ceph-mon[74380]: Deploying daemon osd.0 on compute-0
Jan 21 11:04:35 np0005590810 systemd[1]: Started libpod-conmon-70cd965b45540015716806bf0b2754ac7cddff6c13015a4698c1ca5264304b40.scope.
Jan 21 11:04:35 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed5bb4757cd893a72d27653cdc2f2ae36636f941c262a3564167d0ae6b13b04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed5bb4757cd893a72d27653cdc2f2ae36636f941c262a3564167d0ae6b13b04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed5bb4757cd893a72d27653cdc2f2ae36636f941c262a3564167d0ae6b13b04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed5bb4757cd893a72d27653cdc2f2ae36636f941c262a3564167d0ae6b13b04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed5bb4757cd893a72d27653cdc2f2ae36636f941c262a3564167d0ae6b13b04/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:35 np0005590810 podman[82349]: 2026-01-21 16:04:35.565776301 +0000 UTC m=+0.019477148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:04:35 np0005590810 podman[82349]: 2026-01-21 16:04:35.668186983 +0000 UTC m=+0.121887800 container init 70cd965b45540015716806bf0b2754ac7cddff6c13015a4698c1ca5264304b40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:04:35 np0005590810 podman[82349]: 2026-01-21 16:04:35.683713165 +0000 UTC m=+0.137414002 container start 70cd965b45540015716806bf0b2754ac7cddff6c13015a4698c1ca5264304b40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate-test, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 21 11:04:35 np0005590810 podman[82349]: 2026-01-21 16:04:35.688165619 +0000 UTC m=+0.141866446 container attach 70cd965b45540015716806bf0b2754ac7cddff6c13015a4698c1ca5264304b40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 21 11:04:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate-test[82365]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 21 11:04:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate-test[82365]:                            [--no-systemd] [--no-tmpfs]
Jan 21 11:04:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate-test[82365]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 21 11:04:35 np0005590810 systemd[1]: libpod-70cd965b45540015716806bf0b2754ac7cddff6c13015a4698c1ca5264304b40.scope: Deactivated successfully.
Jan 21 11:04:35 np0005590810 podman[82349]: 2026-01-21 16:04:35.867576832 +0000 UTC m=+0.321277659 container died 70cd965b45540015716806bf0b2754ac7cddff6c13015a4698c1ca5264304b40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:04:35 np0005590810 systemd[1]: var-lib-containers-storage-overlay-9ed5bb4757cd893a72d27653cdc2f2ae36636f941c262a3564167d0ae6b13b04-merged.mount: Deactivated successfully.
Jan 21 11:04:35 np0005590810 podman[82349]: 2026-01-21 16:04:35.937763881 +0000 UTC m=+0.391464698 container remove 70cd965b45540015716806bf0b2754ac7cddff6c13015a4698c1ca5264304b40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate-test, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:04:35 np0005590810 systemd[1]: libpod-conmon-70cd965b45540015716806bf0b2754ac7cddff6c13015a4698c1ca5264304b40.scope: Deactivated successfully.
Jan 21 11:04:36 np0005590810 systemd[1]: Reloading.
Jan 21 11:04:36 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:04:36 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:04:36 np0005590810 systemd[1]: Reloading.
Jan 21 11:04:36 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:04:36 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:04:36 np0005590810 systemd[1]: Starting Ceph osd.0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:04:36 np0005590810 podman[82524]: 2026-01-21 16:04:36.9060121 +0000 UTC m=+0.037744167 container create 1b78a3556486e8ab870545d7034966cc5a56c1750c9b6cef17683033df4499ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:04:36 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfa2272c931aac0c5a98a48b65b3b827942632334231a34ee79f0be01defe50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfa2272c931aac0c5a98a48b65b3b827942632334231a34ee79f0be01defe50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfa2272c931aac0c5a98a48b65b3b827942632334231a34ee79f0be01defe50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfa2272c931aac0c5a98a48b65b3b827942632334231a34ee79f0be01defe50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfa2272c931aac0c5a98a48b65b3b827942632334231a34ee79f0be01defe50/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:36 np0005590810 podman[82524]: 2026-01-21 16:04:36.889963479 +0000 UTC m=+0.021695566 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:04:37 np0005590810 podman[82524]: 2026-01-21 16:04:37.156125732 +0000 UTC m=+0.287857809 container init 1b78a3556486e8ab870545d7034966cc5a56c1750c9b6cef17683033df4499ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:04:37 np0005590810 podman[82524]: 2026-01-21 16:04:37.163291263 +0000 UTC m=+0.295023330 container start 1b78a3556486e8ab870545d7034966cc5a56c1750c9b6cef17683033df4499ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 11:04:37 np0005590810 podman[82524]: 2026-01-21 16:04:37.304830209 +0000 UTC m=+0.436562306 container attach 1b78a3556486e8ab870545d7034966cc5a56c1750c9b6cef17683033df4499ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:04:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate[82540]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 11:04:37 np0005590810 bash[82524]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 11:04:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate[82540]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 11:04:37 np0005590810 bash[82524]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 11:04:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:04:37 np0005590810 lvm[82621]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:04:37 np0005590810 lvm[82621]: VG ceph_vg0 finished
Jan 21 11:04:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate[82540]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 11:04:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate[82540]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 11:04:37 np0005590810 bash[82524]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 11:04:37 np0005590810 bash[82524]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 11:04:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate[82540]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 11:04:37 np0005590810 bash[82524]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 11:04:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate[82540]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 11:04:38 np0005590810 bash[82524]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 11:04:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate[82540]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 21 11:04:38 np0005590810 bash[82524]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 21 11:04:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate[82540]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:38 np0005590810 bash[82524]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate[82540]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:38 np0005590810 bash[82524]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate[82540]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 11:04:38 np0005590810 bash[82524]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 11:04:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate[82540]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 11:04:38 np0005590810 bash[82524]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 11:04:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate[82540]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 21 11:04:38 np0005590810 bash[82524]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 21 11:04:38 np0005590810 systemd[1]: libpod-1b78a3556486e8ab870545d7034966cc5a56c1750c9b6cef17683033df4499ab.scope: Deactivated successfully.
Jan 21 11:04:38 np0005590810 systemd[1]: libpod-1b78a3556486e8ab870545d7034966cc5a56c1750c9b6cef17683033df4499ab.scope: Consumed 1.380s CPU time.
Jan 21 11:04:38 np0005590810 podman[82716]: 2026-01-21 16:04:38.47053206 +0000 UTC m=+0.027536679 container died 1b78a3556486e8ab870545d7034966cc5a56c1750c9b6cef17683033df4499ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Jan 21 11:04:38 np0005590810 systemd[1]: var-lib-containers-storage-overlay-5cfa2272c931aac0c5a98a48b65b3b827942632334231a34ee79f0be01defe50-merged.mount: Deactivated successfully.
Jan 21 11:04:38 np0005590810 podman[82716]: 2026-01-21 16:04:38.517427316 +0000 UTC m=+0.074431915 container remove 1b78a3556486e8ab870545d7034966cc5a56c1750c9b6cef17683033df4499ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 21 11:04:38 np0005590810 podman[82774]: 2026-01-21 16:04:38.723658777 +0000 UTC m=+0.045883138 container create 9c20b5361e265e9438fe0d12138e4954fb9a2c0dcffb45024009ec084d03d956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 21 11:04:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585de4cf3bb7b125d90174ab183f932eb74e2dd4050edc79b0ccd88631dafa8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585de4cf3bb7b125d90174ab183f932eb74e2dd4050edc79b0ccd88631dafa8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585de4cf3bb7b125d90174ab183f932eb74e2dd4050edc79b0ccd88631dafa8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585de4cf3bb7b125d90174ab183f932eb74e2dd4050edc79b0ccd88631dafa8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585de4cf3bb7b125d90174ab183f932eb74e2dd4050edc79b0ccd88631dafa8e/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:38 np0005590810 podman[82774]: 2026-01-21 16:04:38.787267663 +0000 UTC m=+0.109491884 container init 9c20b5361e265e9438fe0d12138e4954fb9a2c0dcffb45024009ec084d03d956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:04:38 np0005590810 podman[82774]: 2026-01-21 16:04:38.699045716 +0000 UTC m=+0.021269967 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:04:38 np0005590810 podman[82774]: 2026-01-21 16:04:38.798000612 +0000 UTC m=+0.120224783 container start 9c20b5361e265e9438fe0d12138e4954fb9a2c0dcffb45024009ec084d03d956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:04:38 np0005590810 bash[82774]: 9c20b5361e265e9438fe0d12138e4954fb9a2c0dcffb45024009ec084d03d956
Jan 21 11:04:38 np0005590810 systemd[1]: Started Ceph osd.0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:04:38 np0005590810 ceph-osd[82794]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 11:04:38 np0005590810 ceph-osd[82794]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Jan 21 11:04:38 np0005590810 ceph-osd[82794]: pidfile_write: ignore empty --pid-file
Jan 21 11:04:38 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:38 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:38 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:38 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:38 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:04:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:04:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:04:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:04:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:39 np0005590810 podman[82903]: 2026-01-21 16:04:39.337187677 +0000 UTC m=+0.043515741 container create ce1b5c17cd86f88a987ca977bcf05d322ea9455b27ee88cd41e76c4c1c991625 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cannon, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:04:39 np0005590810 systemd[1]: Started libpod-conmon-ce1b5c17cd86f88a987ca977bcf05d322ea9455b27ee88cd41e76c4c1c991625.scope.
Jan 21 11:04:39 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:39 np0005590810 podman[82903]: 2026-01-21 16:04:39.408898654 +0000 UTC m=+0.115226738 container init ce1b5c17cd86f88a987ca977bcf05d322ea9455b27ee88cd41e76c4c1c991625 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cannon, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:04:39 np0005590810 podman[82903]: 2026-01-21 16:04:39.3179316 +0000 UTC m=+0.024259714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:04:39 np0005590810 podman[82903]: 2026-01-21 16:04:39.416811144 +0000 UTC m=+0.123139208 container start ce1b5c17cd86f88a987ca977bcf05d322ea9455b27ee88cd41e76c4c1c991625 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:04:39 np0005590810 podman[82903]: 2026-01-21 16:04:39.420058821 +0000 UTC m=+0.126386905 container attach ce1b5c17cd86f88a987ca977bcf05d322ea9455b27ee88cd41e76c4c1c991625 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cannon, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 21 11:04:39 np0005590810 pedantic_cannon[82919]: 167 167
Jan 21 11:04:39 np0005590810 systemd[1]: libpod-ce1b5c17cd86f88a987ca977bcf05d322ea9455b27ee88cd41e76c4c1c991625.scope: Deactivated successfully.
Jan 21 11:04:39 np0005590810 conmon[82919]: conmon ce1b5c17cd86f88a987c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce1b5c17cd86f88a987ca977bcf05d322ea9455b27ee88cd41e76c4c1c991625.scope/container/memory.events
Jan 21 11:04:39 np0005590810 podman[82903]: 2026-01-21 16:04:39.425583034 +0000 UTC m=+0.131911098 container died ce1b5c17cd86f88a987ca977bcf05d322ea9455b27ee88cd41e76c4c1c991625 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:39 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c140da587a2bba489536e603fb5cd28cbc148563d2763d37c6a4f1924c3430eb-merged.mount: Deactivated successfully.
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:39 np0005590810 podman[82903]: 2026-01-21 16:04:39.470098993 +0000 UTC m=+0.176427047 container remove ce1b5c17cd86f88a987ca977bcf05d322ea9455b27ee88cd41e76c4c1c991625 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 11:04:39 np0005590810 systemd[1]: libpod-conmon-ce1b5c17cd86f88a987ca977bcf05d322ea9455b27ee88cd41e76c4c1c991625.scope: Deactivated successfully.
Jan 21 11:04:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:39 np0005590810 podman[82950]: 2026-01-21 16:04:39.620522829 +0000 UTC m=+0.042341892 container create 743e39c3bf450fae59ad8bda5a789e133952032284e1844d14c59441454d9f2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 11:04:39 np0005590810 systemd[1]: Started libpod-conmon-743e39c3bf450fae59ad8bda5a789e133952032284e1844d14c59441454d9f2a.scope.
Jan 21 11:04:39 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb0670d3173f0a98cba565f2109e138c8c64403d8f72b026b4e74c530c99208/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb0670d3173f0a98cba565f2109e138c8c64403d8f72b026b4e74c530c99208/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb0670d3173f0a98cba565f2109e138c8c64403d8f72b026b4e74c530c99208/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb0670d3173f0a98cba565f2109e138c8c64403d8f72b026b4e74c530c99208/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:39 np0005590810 podman[82950]: 2026-01-21 16:04:39.690721228 +0000 UTC m=+0.112540311 container init 743e39c3bf450fae59ad8bda5a789e133952032284e1844d14c59441454d9f2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_dhawan, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:04:39 np0005590810 podman[82950]: 2026-01-21 16:04:39.604478457 +0000 UTC m=+0.026297540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:04:39 np0005590810 podman[82950]: 2026-01-21 16:04:39.699766827 +0000 UTC m=+0.121585890 container start 743e39c3bf450fae59ad8bda5a789e133952032284e1844d14c59441454d9f2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_dhawan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 21 11:04:39 np0005590810 podman[82950]: 2026-01-21 16:04:39.702578754 +0000 UTC m=+0.124397837 container attach 743e39c3bf450fae59ad8bda5a789e133952032284e1844d14c59441454d9f2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_dhawan, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a719b5800 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:39 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:39 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:39 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:39 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: load: jerasure load: lrc 
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 11:04:39 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:40 np0005590810 lvm[83046]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:40 np0005590810 lvm[83046]: VG ceph_vg0 finished
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:40 np0005590810 boring_dhawan[82966]: {}
Jan 21 11:04:40 np0005590810 systemd[1]: libpod-743e39c3bf450fae59ad8bda5a789e133952032284e1844d14c59441454d9f2a.scope: Deactivated successfully.
Jan 21 11:04:40 np0005590810 podman[82950]: 2026-01-21 16:04:40.357923515 +0000 UTC m=+0.779742578 container died 743e39c3bf450fae59ad8bda5a789e133952032284e1844d14c59441454d9f2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:04:40 np0005590810 systemd[1]: libpod-743e39c3bf450fae59ad8bda5a789e133952032284e1844d14c59441454d9f2a.scope: Consumed 1.054s CPU time.
Jan 21 11:04:40 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ceb0670d3173f0a98cba565f2109e138c8c64403d8f72b026b4e74c530c99208-merged.mount: Deactivated successfully.
Jan 21 11:04:40 np0005590810 podman[82950]: 2026-01-21 16:04:40.399167839 +0000 UTC m=+0.820986892 container remove 743e39c3bf450fae59ad8bda5a789e133952032284e1844d14c59441454d9f2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:04:40 np0005590810 systemd[1]: libpod-conmon-743e39c3bf450fae59ad8bda5a789e133952032284e1844d14c59441454d9f2a.scope: Deactivated successfully.
Jan 21 11:04:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:04:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:04:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:40 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:41 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:41 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:04:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:04:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284d000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount shared_bdev_used = 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: RocksDB version: 7.9.2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Git sha 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: DB SUMMARY
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: DB Session ID:  F278HVM97VMUEGNVUPLD
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: CURRENT file:  CURRENT
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                         Options.error_if_exists: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.create_if_missing: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                                     Options.env: 0x557a72821ea0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                                Options.info_log: 0x557a72825800
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                              Options.statistics: (nil)
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.use_fsync: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                              Options.db_log_dir: 
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.write_buffer_manager: 0x557a72916a00
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.unordered_write: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.row_cache: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                              Options.wal_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.two_write_queues: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.wal_compression: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.atomic_flush: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.max_background_jobs: 4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.max_background_compactions: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.max_subcompactions: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.max_open_files: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Compression algorithms supported:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kZSTD supported: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kXpressCompression supported: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kBZip2Compression supported: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kLZ4Compression supported: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kZlibCompression supported: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kSnappyCompression supported: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557a71a4b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a71a4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a71a4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a71a4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a71a4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a71a4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a71a4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825be0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557a71a4a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825be0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557a71a4a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825be0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557a71a4a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 53c4448b-e615-421d-9327-0f9bf408ec11
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011481362192, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011481362468, "job": 1, "event": "recovery_finished"}
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: freelist init
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: freelist _read_cfg
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs umount
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284d000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 11:04:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 21 11:04:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/4259535422,v1:192.168.122.101:6801/4259535422]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 21 11:04:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284d000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bdev(0x557a7284d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluefs mount shared_bdev_used = 4718592
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: RocksDB version: 7.9.2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Git sha 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: DB SUMMARY
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: DB Session ID:  F278HVM97VMUEGNVUPLC
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: CURRENT file:  CURRENT
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                         Options.error_if_exists: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.create_if_missing: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                                     Options.env: 0x557a729b4380
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                                Options.info_log: 0x557a72a9c760
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                              Options.statistics: (nil)
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.use_fsync: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                              Options.db_log_dir: 
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.write_buffer_manager: 0x557a72916a00
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.unordered_write: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.row_cache: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                              Options.wal_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.two_write_queues: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.wal_compression: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.atomic_flush: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.max_background_jobs: 4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.max_background_compactions: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.max_subcompactions: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.max_open_files: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Compression algorithms supported:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kZSTD supported: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kXpressCompression supported: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kBZip2Compression supported: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kLZ4Compression supported: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kZlibCompression supported: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: 	kSnappyCompression supported: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a728256e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557a71a4b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a728256e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a71a4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a728256e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a71a4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a728256e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a71a4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a728256e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a71a4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a728256e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a71a4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a728256e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557a71a4b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825b20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557a71a4a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825b20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557a71a4a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:           Options.merge_operator: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a72825b20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a71a4a9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.compression: LZ4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.num_levels: 7
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.bloom_locality: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                               Options.ttl: 2592000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                       Options.enable_blob_files: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                           Options.min_blob_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 53c4448b-e615-421d-9327-0f9bf408ec11
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011481639365, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011481642145, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011481, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53c4448b-e615-421d-9327-0f9bf408ec11", "db_session_id": "F278HVM97VMUEGNVUPLC", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011481645301, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011481, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53c4448b-e615-421d-9327-0f9bf408ec11", "db_session_id": "F278HVM97VMUEGNVUPLC", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011481648491, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011481, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53c4448b-e615-421d-9327-0f9bf408ec11", "db_session_id": "F278HVM97VMUEGNVUPLC", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011481650026, "job": 1, "event": "recovery_finished"}
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557a72a20000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: DB pointer 0x557a729ca000
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a71a4b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a71a4b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a71a4b350#2 capacity: 460.80 MB usag
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: _get_class not permitted to load lua
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: _get_class not permitted to load sdk
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: osd.0 0 load_pgs
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: osd.0 0 load_pgs opened 0 pgs
Jan 21 11:04:41 np0005590810 ceph-osd[82794]: osd.0 0 log_to_monitors true
Jan 21 11:04:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0[82790]: 2026-01-21T16:04:41.675+0000 7f4718564740 -1 osd.0 0 log_to_monitors true
Jan 21 11:04:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 21 11:04:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3490583477,v1:192.168.122.100:6803/3490583477]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 21 11:04:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:04:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:42 np0005590810 podman[83627]: 2026-01-21 16:04:42.052489255 +0000 UTC m=+0.061497670 container exec 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:04:42 np0005590810 podman[83627]: 2026-01-21 16:04:42.176218542 +0000 UTC m=+0.185226957 container exec_died 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: from='osd.1 [v2:192.168.122.101:6800/4259535422,v1:192.168.122.101:6801/4259535422]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: from='osd.0 [v2:192.168.122.100:6802/3490583477,v1:192.168.122.100:6803/3490583477]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/4259535422,v1:192.168.122.101:6801/4259535422]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3490583477,v1:192.168.122.100:6803/3490583477]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3490583477,v1:192.168.122.100:6803/3490583477]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/4259535422,v1:192.168.122.101:6801/4259535422]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 21 11:04:42 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 11:04:42 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:04:42 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 21 11:04:42 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3490583477,v1:192.168.122.100:6803/3490583477]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/4259535422,v1:192.168.122.101:6801/4259535422]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 21 11:04:43 np0005590810 ceph-osd[82794]: osd.0 0 done with init, starting boot process
Jan 21 11:04:43 np0005590810 ceph-osd[82794]: osd.0 0 start_boot
Jan 21 11:04:43 np0005590810 ceph-osd[82794]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 21 11:04:43 np0005590810 ceph-osd[82794]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 21 11:04:43 np0005590810 ceph-osd[82794]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 21 11:04:43 np0005590810 ceph-osd[82794]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 21 11:04:43 np0005590810 ceph-osd[82794]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: from='osd.1 [v2:192.168.122.101:6800/4259535422,v1:192.168.122.101:6801/4259535422]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: from='osd.0 [v2:192.168.122.100:6802/3490583477,v1:192.168.122.100:6803/3490583477]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: from='osd.0 [v2:192.168.122.100:6802/3490583477,v1:192.168.122.100:6803/3490583477]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: from='osd.1 [v2:192.168.122.101:6800/4259535422,v1:192.168.122.101:6801/4259535422]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:43 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 11:04:43 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 11:04:43 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3490583477; not ready for session (expect reconnect)
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:04:43 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 11:04:43 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4259535422; not ready for session (expect reconnect)
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 11:04:43 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 11:04:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:04:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:43 np0005590810 podman[83884]: 2026-01-21 16:04:43.676650355 +0000 UTC m=+0.047887346 container create 6682f6562a684fcb12c19d6149e0cf23c620b582ce117e7816e49d71a7bd4989 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_blackwell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 11:04:43 np0005590810 systemd[1]: Started libpod-conmon-6682f6562a684fcb12c19d6149e0cf23c620b582ce117e7816e49d71a7bd4989.scope.
Jan 21 11:04:43 np0005590810 podman[83884]: 2026-01-21 16:04:43.653479672 +0000 UTC m=+0.024716693 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:04:43 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:43 np0005590810 podman[83884]: 2026-01-21 16:04:43.787686376 +0000 UTC m=+0.158923387 container init 6682f6562a684fcb12c19d6149e0cf23c620b582ce117e7816e49d71a7bd4989 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_blackwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 21 11:04:43 np0005590810 podman[83884]: 2026-01-21 16:04:43.796866876 +0000 UTC m=+0.168103867 container start 6682f6562a684fcb12c19d6149e0cf23c620b582ce117e7816e49d71a7bd4989 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Jan 21 11:04:43 np0005590810 mystifying_blackwell[83900]: 167 167
Jan 21 11:04:43 np0005590810 systemd[1]: libpod-6682f6562a684fcb12c19d6149e0cf23c620b582ce117e7816e49d71a7bd4989.scope: Deactivated successfully.
Jan 21 11:04:43 np0005590810 conmon[83900]: conmon 6682f6562a684fcb12c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6682f6562a684fcb12c19d6149e0cf23c620b582ce117e7816e49d71a7bd4989.scope/container/memory.events
Jan 21 11:04:43 np0005590810 podman[83884]: 2026-01-21 16:04:43.805664996 +0000 UTC m=+0.176902007 container attach 6682f6562a684fcb12c19d6149e0cf23c620b582ce117e7816e49d71a7bd4989 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_blackwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:04:43 np0005590810 podman[83884]: 2026-01-21 16:04:43.806049366 +0000 UTC m=+0.177286357 container died 6682f6562a684fcb12c19d6149e0cf23c620b582ce117e7816e49d71a7bd4989 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:04:43 np0005590810 systemd[1]: var-lib-containers-storage-overlay-62bfdf53105f7fcfcf6df0493899a17a5d7fcaf06fc02bcc91ab9e092ff9626d-merged.mount: Deactivated successfully.
Jan 21 11:04:43 np0005590810 podman[83884]: 2026-01-21 16:04:43.920797053 +0000 UTC m=+0.292034044 container remove 6682f6562a684fcb12c19d6149e0cf23c620b582ce117e7816e49d71a7bd4989 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:04:43 np0005590810 systemd[1]: libpod-conmon-6682f6562a684fcb12c19d6149e0cf23c620b582ce117e7816e49d71a7bd4989.scope: Deactivated successfully.
Jan 21 11:04:44 np0005590810 podman[83922]: 2026-01-21 16:04:44.142736815 +0000 UTC m=+0.075573953 container create 65165e3acd53bd5fc50004c776c0758e0a37b475ac3412325397d9eadab46275 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 21 11:04:44 np0005590810 systemd[1]: Started libpod-conmon-65165e3acd53bd5fc50004c776c0758e0a37b475ac3412325397d9eadab46275.scope.
Jan 21 11:04:44 np0005590810 podman[83922]: 2026-01-21 16:04:44.11300327 +0000 UTC m=+0.045840428 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:04:44 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:44 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe2da5a1ce8e6b0d87a1f2d09eaec218b7a4ba07ca39ebc9a953a15233fe28b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:44 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe2da5a1ce8e6b0d87a1f2d09eaec218b7a4ba07ca39ebc9a953a15233fe28b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:44 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe2da5a1ce8e6b0d87a1f2d09eaec218b7a4ba07ca39ebc9a953a15233fe28b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:44 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe2da5a1ce8e6b0d87a1f2d09eaec218b7a4ba07ca39ebc9a953a15233fe28b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:44 np0005590810 podman[83922]: 2026-01-21 16:04:44.236538127 +0000 UTC m=+0.169375265 container init 65165e3acd53bd5fc50004c776c0758e0a37b475ac3412325397d9eadab46275 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:04:44 np0005590810 podman[83922]: 2026-01-21 16:04:44.249153322 +0000 UTC m=+0.181990460 container start 65165e3acd53bd5fc50004c776c0758e0a37b475ac3412325397d9eadab46275 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lovelace, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 21 11:04:44 np0005590810 podman[83922]: 2026-01-21 16:04:44.262297828 +0000 UTC m=+0.195134996 container attach 65165e3acd53bd5fc50004c776c0758e0a37b475ac3412325397d9eadab46275 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:04:44 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3490583477; not ready for session (expect reconnect)
Jan 21 11:04:44 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4259535422; not ready for session (expect reconnect)
Jan 21 11:04:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:04:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:04:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 11:04:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 11:04:44 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 11:04:44 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 11:04:44 np0005590810 ceph-mon[74380]: from='osd.0 [v2:192.168.122.100:6802/3490583477,v1:192.168.122.100:6803/3490583477]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 11:04:44 np0005590810 ceph-mon[74380]: from='osd.1 [v2:192.168.122.101:6800/4259535422,v1:192.168.122.101:6801/4259535422]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Jan 21 11:04:44 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]: [
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:    {
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:        "available": false,
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:        "being_replaced": false,
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:        "ceph_device_lvm": false,
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:        "lsm_data": {},
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:        "lvs": [],
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:        "path": "/dev/sr0",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:        "rejected_reasons": [
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "Has a FileSystem",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "Insufficient space (<5GB)"
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:        ],
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:        "sys_api": {
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "actuators": null,
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "device_nodes": [
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:                "sr0"
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            ],
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "devname": "sr0",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "human_readable_size": "482.00 KB",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "id_bus": "ata",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "model": "QEMU DVD-ROM",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "nr_requests": "2",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "parent": "/dev/sr0",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "partitions": {},
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "path": "/dev/sr0",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "removable": "1",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "rev": "2.5+",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "ro": "0",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "rotational": "1",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "sas_address": "",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "sas_device_handle": "",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "scheduler_mode": "mq-deadline",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "sectors": 0,
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "sectorsize": "2048",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "size": 493568.0,
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "support_discard": "2048",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "type": "disk",
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:            "vendor": "QEMU"
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:        }
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]:    }
Jan 21 11:04:45 np0005590810 objective_lovelace[83939]: ]
Jan 21 11:04:45 np0005590810 systemd[1]: libpod-65165e3acd53bd5fc50004c776c0758e0a37b475ac3412325397d9eadab46275.scope: Deactivated successfully.
Jan 21 11:04:45 np0005590810 podman[85101]: 2026-01-21 16:04:45.163423642 +0000 UTC m=+0.062595756 container died 65165e3acd53bd5fc50004c776c0758e0a37b475ac3412325397d9eadab46275 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lovelace, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:04:45 np0005590810 systemd[1]: var-lib-containers-storage-overlay-dbe2da5a1ce8e6b0d87a1f2d09eaec218b7a4ba07ca39ebc9a953a15233fe28b-merged.mount: Deactivated successfully.
Jan 21 11:04:45 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3490583477; not ready for session (expect reconnect)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:04:45 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 11:04:45 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4259535422; not ready for session (expect reconnect)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 11:04:45 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 11:04:45 np0005590810 podman[85101]: 2026-01-21 16:04:45.349629009 +0000 UTC m=+0.248801083 container remove 65165e3acd53bd5fc50004c776c0758e0a37b475ac3412325397d9eadab46275 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lovelace, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:04:45 np0005590810 systemd[1]: libpod-conmon-65165e3acd53bd5fc50004c776c0758e0a37b475ac3412325397d9eadab46275.scope: Deactivated successfully.
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:04:45 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 21 11:04:45 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 21 11:04:45 np0005590810 ceph-mgr[74671]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 21 11:04:45 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 21 11:04:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 21 11:04:45 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 21 11:04:45 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 21 11:04:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:46 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3490583477; not ready for session (expect reconnect)
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:04:46 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 11:04:46 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4259535422; not ready for session (expect reconnect)
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 11:04:46 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 21 11:04:46 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:04:46 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:04:46 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:04:46 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:04:46 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:04:46 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:04:46 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:04:46 np0005590810 ceph-osd[82794]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 26.752 iops: 6848.553 elapsed_sec: 0.438
Jan 21 11:04:46 np0005590810 ceph-osd[82794]: log_channel(cluster) log [WRN] : OSD bench result of 6848.553425 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 11:04:46 np0005590810 ceph-osd[82794]: osd.0 0 waiting for initial osdmap
Jan 21 11:04:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0[82790]: 2026-01-21T16:04:46.783+0000 7f47144e7640 -1 osd.0 0 waiting for initial osdmap
Jan 21 11:04:46 np0005590810 ceph-osd[82794]: osd.0 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 21 11:04:46 np0005590810 ceph-osd[82794]: osd.0 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 21 11:04:46 np0005590810 ceph-osd[82794]: osd.0 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 21 11:04:46 np0005590810 ceph-osd[82794]: osd.0 7 check_osdmap_features require_osd_release unknown -> squid
Jan 21 11:04:46 np0005590810 ceph-osd[82794]: osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 11:04:46 np0005590810 ceph-osd[82794]: osd.0 7 set_numa_affinity not setting numa affinity
Jan 21 11:04:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-osd-0[82790]: 2026-01-21T16:04:46.800+0000 7f470fb0f640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 11:04:46 np0005590810 ceph-osd[82794]: osd.0 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 21 11:04:47 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3490583477; not ready for session (expect reconnect)
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:04:47 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 11:04:47 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4259535422; not ready for session (expect reconnect)
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 11:04:47 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e8 e8: 2 total, 2 up, 2 in
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3490583477,v1:192.168.122.100:6803/3490583477] boot
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/4259535422,v1:192.168.122.101:6801/4259535422] boot
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 2 up, 2 in
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 11:04:47 np0005590810 ceph-osd[82794]: osd.0 8 state: booting -> active
Jan 21 11:04:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 11:04:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:04:48 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] creating mgr pool
Jan 21 11:04:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 21 11:04:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 11:04:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 21 11:04:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 21 11:04:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Jan 21 11:04:48 np0005590810 ceph-mon[74380]: OSD bench result of 6848.553425 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 11:04:48 np0005590810 ceph-mon[74380]: OSD bench result of 5772.140328 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 11:04:48 np0005590810 ceph-mon[74380]: osd.0 [v2:192.168.122.100:6802/3490583477,v1:192.168.122.100:6803/3490583477] boot
Jan 21 11:04:48 np0005590810 ceph-mon[74380]: osd.1 [v2:192.168.122.101:6800/4259535422,v1:192.168.122.101:6801/4259535422] boot
Jan 21 11:04:48 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Jan 21 11:04:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 21 11:04:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 21 11:04:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 11:04:49 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 21 11:04:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 21 11:04:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 21 11:04:50 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 21 11:04:51 np0005590810 ceph-osd[82794]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 21 11:04:51 np0005590810 ceph-osd[82794]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 21 11:04:51 np0005590810 ceph-osd[82794]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 21 11:04:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 21 11:04:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 21 11:04:51 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 21 11:04:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Jan 21 11:04:52 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 21 11:04:52 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] creating main.db for devicehealth
Jan 21 11:04:52 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] Check health
Jan 21 11:04:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 21 11:04:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 21 11:04:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 11:04:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 11:04:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:04:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 21 11:04:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 21 11:04:53 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 21 11:04:53 np0005590810 ceph-mon[74380]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 21 11:04:53 np0005590810 ceph-mon[74380]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 21 11:04:53 np0005590810 python3[85156]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:04:53 np0005590810 podman[85158]: 2026-01-21 16:04:53.364620922 +0000 UTC m=+0.045713779 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:04:53 np0005590810 podman[85158]: 2026-01-21 16:04:53.470393653 +0000 UTC m=+0.151486490 container create 2ec151b1966f5d6da83536f7fdcd31aaffbc676b67633fbd1ddced57aa216c13 (image=quay.io/ceph/ceph:v19, name=tender_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 11:04:53 np0005590810 systemd[1]: Started libpod-conmon-2ec151b1966f5d6da83536f7fdcd31aaffbc676b67633fbd1ddced57aa216c13.scope.
Jan 21 11:04:53 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19a1175d298d4e204e0ed6cf51c06714f3a1a60cfb5f34354727ff7b5b3ca79/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19a1175d298d4e204e0ed6cf51c06714f3a1a60cfb5f34354727ff7b5b3ca79/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19a1175d298d4e204e0ed6cf51c06714f3a1a60cfb5f34354727ff7b5b3ca79/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:04:53 np0005590810 podman[85158]: 2026-01-21 16:04:53.547194947 +0000 UTC m=+0.228287824 container init 2ec151b1966f5d6da83536f7fdcd31aaffbc676b67633fbd1ddced57aa216c13 (image=quay.io/ceph/ceph:v19, name=tender_carver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:04:53 np0005590810 podman[85158]: 2026-01-21 16:04:53.553774711 +0000 UTC m=+0.234867538 container start 2ec151b1966f5d6da83536f7fdcd31aaffbc676b67633fbd1ddced57aa216c13 (image=quay.io/ceph/ceph:v19, name=tender_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:04:53 np0005590810 podman[85158]: 2026-01-21 16:04:53.55697021 +0000 UTC m=+0.238063067 container attach 2ec151b1966f5d6da83536f7fdcd31aaffbc676b67633fbd1ddced57aa216c13 (image=quay.io/ceph/ceph:v19, name=tender_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 11:04:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 11:04:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/354382977' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 11:04:54 np0005590810 tender_carver[85174]: 
Jan 21 11:04:54 np0005590810 tender_carver[85174]: {"fsid":"d9745984-fea8-5195-8ec5-61f685b5c785","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":122,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":13,"num_osds":2,"num_up_osds":2,"osd_up_since":1769011487,"num_in_osds":2,"osd_in_since":1769011468,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":893968384,"bytes_avail":42047315968,"bytes_total":42941284352,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-21T16:02:49:724869+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-21T16:04:17.975788+0000","services":{}},"progress_events":{}}
Jan 21 11:04:54 np0005590810 systemd[1]: libpod-2ec151b1966f5d6da83536f7fdcd31aaffbc676b67633fbd1ddced57aa216c13.scope: Deactivated successfully.
Jan 21 11:04:54 np0005590810 podman[85158]: 2026-01-21 16:04:54.034419963 +0000 UTC m=+0.715512800 container died 2ec151b1966f5d6da83536f7fdcd31aaffbc676b67633fbd1ddced57aa216c13 (image=quay.io/ceph/ceph:v19, name=tender_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Jan 21 11:04:54 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ygffhs(active, since 98s)
Jan 21 11:04:54 np0005590810 systemd[1]: var-lib-containers-storage-overlay-e19a1175d298d4e204e0ed6cf51c06714f3a1a60cfb5f34354727ff7b5b3ca79-merged.mount: Deactivated successfully.
Jan 21 11:04:54 np0005590810 podman[85158]: 2026-01-21 16:04:54.069893534 +0000 UTC m=+0.750986361 container remove 2ec151b1966f5d6da83536f7fdcd31aaffbc676b67633fbd1ddced57aa216c13 (image=quay.io/ceph/ceph:v19, name=tender_carver, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:04:54 np0005590810 systemd[1]: libpod-conmon-2ec151b1966f5d6da83536f7fdcd31aaffbc676b67633fbd1ddced57aa216c13.scope: Deactivated successfully.
Jan 21 11:04:54 np0005590810 python3[85236]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:04:54 np0005590810 podman[85237]: 2026-01-21 16:04:54.570602989 +0000 UTC m=+0.042253902 container create 910794e56fd38bb95eeeebf9d79e95521583faa096ad48b35d380d9715feb42a (image=quay.io/ceph/ceph:v19, name=determined_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:04:54 np0005590810 systemd[1]: Started libpod-conmon-910794e56fd38bb95eeeebf9d79e95521583faa096ad48b35d380d9715feb42a.scope.
Jan 21 11:04:54 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:54 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad65a92e8a6ec9619bcae7e08db44e9ad9d2c7820aa554d0e70f20f68d09565/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:54 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad65a92e8a6ec9619bcae7e08db44e9ad9d2c7820aa554d0e70f20f68d09565/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:54 np0005590810 podman[85237]: 2026-01-21 16:04:54.626284147 +0000 UTC m=+0.097935090 container init 910794e56fd38bb95eeeebf9d79e95521583faa096ad48b35d380d9715feb42a (image=quay.io/ceph/ceph:v19, name=determined_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 21 11:04:54 np0005590810 podman[85237]: 2026-01-21 16:04:54.631650103 +0000 UTC m=+0.103301016 container start 910794e56fd38bb95eeeebf9d79e95521583faa096ad48b35d380d9715feb42a (image=quay.io/ceph/ceph:v19, name=determined_swirles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:04:54 np0005590810 podman[85237]: 2026-01-21 16:04:54.634932675 +0000 UTC m=+0.106583588 container attach 910794e56fd38bb95eeeebf9d79e95521583faa096ad48b35d380d9715feb42a (image=quay.io/ceph/ceph:v19, name=determined_swirles, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:04:54 np0005590810 podman[85237]: 2026-01-21 16:04:54.553582201 +0000 UTC m=+0.025233134 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:04:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 11:04:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2933388630' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 11:04:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 21 11:04:55 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2933388630' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 11:04:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2933388630' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 11:04:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Jan 21 11:04:55 np0005590810 determined_swirles[85253]: pool 'vms' created
Jan 21 11:04:55 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 21 11:04:55 np0005590810 systemd[1]: libpod-910794e56fd38bb95eeeebf9d79e95521583faa096ad48b35d380d9715feb42a.scope: Deactivated successfully.
Jan 21 11:04:55 np0005590810 podman[85237]: 2026-01-21 16:04:55.069542608 +0000 UTC m=+0.541193541 container died 910794e56fd38bb95eeeebf9d79e95521583faa096ad48b35d380d9715feb42a (image=quay.io/ceph/ceph:v19, name=determined_swirles, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 21 11:04:55 np0005590810 systemd[1]: var-lib-containers-storage-overlay-3ad65a92e8a6ec9619bcae7e08db44e9ad9d2c7820aa554d0e70f20f68d09565-merged.mount: Deactivated successfully.
Jan 21 11:04:55 np0005590810 podman[85237]: 2026-01-21 16:04:55.101281984 +0000 UTC m=+0.572932897 container remove 910794e56fd38bb95eeeebf9d79e95521583faa096ad48b35d380d9715feb42a (image=quay.io/ceph/ceph:v19, name=determined_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 21 11:04:55 np0005590810 systemd[1]: libpod-conmon-910794e56fd38bb95eeeebf9d79e95521583faa096ad48b35d380d9715feb42a.scope: Deactivated successfully.
Jan 21 11:04:55 np0005590810 python3[85317]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:04:55 np0005590810 podman[85318]: 2026-01-21 16:04:55.432031655 +0000 UTC m=+0.039659781 container create 456729d4b122ad03a9095000e1d340eaa0e0a29fc421345277b5773e42db6ffa (image=quay.io/ceph/ceph:v19, name=bold_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:04:55 np0005590810 systemd[1]: Started libpod-conmon-456729d4b122ad03a9095000e1d340eaa0e0a29fc421345277b5773e42db6ffa.scope.
Jan 21 11:04:55 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:55 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f6036b44468f4526f72595c53962f0994d9c0c27ff7dc8887b499df14ff099/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:55 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f6036b44468f4526f72595c53962f0994d9c0c27ff7dc8887b499df14ff099/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:55 np0005590810 podman[85318]: 2026-01-21 16:04:55.414514002 +0000 UTC m=+0.022142148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:04:55 np0005590810 podman[85318]: 2026-01-21 16:04:55.512794601 +0000 UTC m=+0.120422747 container init 456729d4b122ad03a9095000e1d340eaa0e0a29fc421345277b5773e42db6ffa (image=quay.io/ceph/ceph:v19, name=bold_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 21 11:04:55 np0005590810 podman[85318]: 2026-01-21 16:04:55.517838907 +0000 UTC m=+0.125467033 container start 456729d4b122ad03a9095000e1d340eaa0e0a29fc421345277b5773e42db6ffa (image=quay.io/ceph/ceph:v19, name=bold_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 21 11:04:55 np0005590810 podman[85318]: 2026-01-21 16:04:55.520785949 +0000 UTC m=+0.128414075 container attach 456729d4b122ad03a9095000e1d340eaa0e0a29fc421345277b5773e42db6ffa (image=quay.io/ceph/ceph:v19, name=bold_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:04:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v56: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:04:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 11:04:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3649606039' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 11:04:56 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 11:04:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 21 11:04:56 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2933388630' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 11:04:56 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3649606039' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 11:04:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3649606039' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 11:04:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Jan 21 11:04:56 np0005590810 bold_cerf[85333]: pool 'volumes' created
Jan 21 11:04:56 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Jan 21 11:04:56 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 15 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:04:56 np0005590810 systemd[1]: libpod-456729d4b122ad03a9095000e1d340eaa0e0a29fc421345277b5773e42db6ffa.scope: Deactivated successfully.
Jan 21 11:04:56 np0005590810 podman[85318]: 2026-01-21 16:04:56.080601187 +0000 UTC m=+0.688229303 container died 456729d4b122ad03a9095000e1d340eaa0e0a29fc421345277b5773e42db6ffa (image=quay.io/ceph/ceph:v19, name=bold_cerf, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:04:56 np0005590810 systemd[1]: var-lib-containers-storage-overlay-35f6036b44468f4526f72595c53962f0994d9c0c27ff7dc8887b499df14ff099-merged.mount: Deactivated successfully.
Jan 21 11:04:56 np0005590810 podman[85318]: 2026-01-21 16:04:56.118591107 +0000 UTC m=+0.726219233 container remove 456729d4b122ad03a9095000e1d340eaa0e0a29fc421345277b5773e42db6ffa (image=quay.io/ceph/ceph:v19, name=bold_cerf, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:04:56 np0005590810 systemd[1]: libpod-conmon-456729d4b122ad03a9095000e1d340eaa0e0a29fc421345277b5773e42db6ffa.scope: Deactivated successfully.
Jan 21 11:04:56 np0005590810 python3[85395]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:04:56 np0005590810 podman[85396]: 2026-01-21 16:04:56.474358074 +0000 UTC m=+0.040949592 container create a36b2cdbdf66dbd3bfd2204beeb4c8da07f970ba91cb1bec1017ff8747522f7e (image=quay.io/ceph/ceph:v19, name=fervent_northcutt, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 21 11:04:56 np0005590810 systemd[1]: Started libpod-conmon-a36b2cdbdf66dbd3bfd2204beeb4c8da07f970ba91cb1bec1017ff8747522f7e.scope.
Jan 21 11:04:56 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:56 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571b5599ff5446afb6f0aae621cfd4bb2a22123aa4508c7bcef49ed4233d07ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:56 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571b5599ff5446afb6f0aae621cfd4bb2a22123aa4508c7bcef49ed4233d07ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:56 np0005590810 podman[85396]: 2026-01-21 16:04:56.543176089 +0000 UTC m=+0.109767627 container init a36b2cdbdf66dbd3bfd2204beeb4c8da07f970ba91cb1bec1017ff8747522f7e (image=quay.io/ceph/ceph:v19, name=fervent_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Jan 21 11:04:56 np0005590810 podman[85396]: 2026-01-21 16:04:56.548307478 +0000 UTC m=+0.114898986 container start a36b2cdbdf66dbd3bfd2204beeb4c8da07f970ba91cb1bec1017ff8747522f7e (image=quay.io/ceph/ceph:v19, name=fervent_northcutt, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 11:04:56 np0005590810 podman[85396]: 2026-01-21 16:04:56.551416285 +0000 UTC m=+0.118007823 container attach a36b2cdbdf66dbd3bfd2204beeb4c8da07f970ba91cb1bec1017ff8747522f7e (image=quay.io/ceph/ceph:v19, name=fervent_northcutt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:04:56 np0005590810 podman[85396]: 2026-01-21 16:04:56.45714642 +0000 UTC m=+0.023737958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:04:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 11:04:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4030775083' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 11:04:57 np0005590810 ceph-mon[74380]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 11:04:57 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3649606039' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 11:04:57 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/4030775083' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 11:04:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 21 11:04:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4030775083' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 11:04:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Jan 21 11:04:57 np0005590810 fervent_northcutt[85412]: pool 'backups' created
Jan 21 11:04:57 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Jan 21 11:04:57 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:04:57 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 16 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:04:57 np0005590810 systemd[1]: libpod-a36b2cdbdf66dbd3bfd2204beeb4c8da07f970ba91cb1bec1017ff8747522f7e.scope: Deactivated successfully.
Jan 21 11:04:57 np0005590810 podman[85396]: 2026-01-21 16:04:57.088202969 +0000 UTC m=+0.654794487 container died a36b2cdbdf66dbd3bfd2204beeb4c8da07f970ba91cb1bec1017ff8747522f7e (image=quay.io/ceph/ceph:v19, name=fervent_northcutt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:04:57 np0005590810 systemd[1]: var-lib-containers-storage-overlay-571b5599ff5446afb6f0aae621cfd4bb2a22123aa4508c7bcef49ed4233d07ff-merged.mount: Deactivated successfully.
Jan 21 11:04:57 np0005590810 podman[85396]: 2026-01-21 16:04:57.122355849 +0000 UTC m=+0.688947367 container remove a36b2cdbdf66dbd3bfd2204beeb4c8da07f970ba91cb1bec1017ff8747522f7e (image=quay.io/ceph/ceph:v19, name=fervent_northcutt, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:04:57 np0005590810 systemd[1]: libpod-conmon-a36b2cdbdf66dbd3bfd2204beeb4c8da07f970ba91cb1bec1017ff8747522f7e.scope: Deactivated successfully.
Jan 21 11:04:57 np0005590810 python3[85475]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:04:57 np0005590810 podman[85476]: 2026-01-21 16:04:57.442396479 +0000 UTC m=+0.047521906 container create bb46ff43ae207881106daa1aaa82b7501f061aa9f021327e5a1a4fdbf53e05f6 (image=quay.io/ceph/ceph:v19, name=stupefied_bouman, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 11:04:57 np0005590810 systemd[1]: Started libpod-conmon-bb46ff43ae207881106daa1aaa82b7501f061aa9f021327e5a1a4fdbf53e05f6.scope.
Jan 21 11:04:57 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:57 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26434f160fb72c17941a59e685cfb4f6e8375a2cf20bae0914b3e312c95a1ffa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:57 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26434f160fb72c17941a59e685cfb4f6e8375a2cf20bae0914b3e312c95a1ffa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:57 np0005590810 podman[85476]: 2026-01-21 16:04:57.424216994 +0000 UTC m=+0.029342431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:04:57 np0005590810 podman[85476]: 2026-01-21 16:04:57.528781269 +0000 UTC m=+0.133906706 container init bb46ff43ae207881106daa1aaa82b7501f061aa9f021327e5a1a4fdbf53e05f6 (image=quay.io/ceph/ceph:v19, name=stupefied_bouman, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 11:04:57 np0005590810 podman[85476]: 2026-01-21 16:04:57.538492159 +0000 UTC m=+0.143617576 container start bb46ff43ae207881106daa1aaa82b7501f061aa9f021327e5a1a4fdbf53e05f6 (image=quay.io/ceph/ceph:v19, name=stupefied_bouman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:04:57 np0005590810 podman[85476]: 2026-01-21 16:04:57.541966947 +0000 UTC m=+0.147092364 container attach bb46ff43ae207881106daa1aaa82b7501f061aa9f021327e5a1a4fdbf53e05f6 (image=quay.io/ceph/ceph:v19, name=stupefied_bouman, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:04:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v59: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:04:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:04:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 11:04:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/583775913' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 11:04:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 21 11:04:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/583775913' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 11:04:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Jan 21 11:04:58 np0005590810 stupefied_bouman[85491]: pool 'images' created
Jan 21 11:04:58 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/4030775083' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 11:04:58 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/583775913' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 11:04:58 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Jan 21 11:04:58 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 17 pg[5.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:04:58 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:04:58 np0005590810 systemd[1]: libpod-bb46ff43ae207881106daa1aaa82b7501f061aa9f021327e5a1a4fdbf53e05f6.scope: Deactivated successfully.
Jan 21 11:04:58 np0005590810 podman[85476]: 2026-01-21 16:04:58.099703952 +0000 UTC m=+0.704829359 container died bb46ff43ae207881106daa1aaa82b7501f061aa9f021327e5a1a4fdbf53e05f6 (image=quay.io/ceph/ceph:v19, name=stupefied_bouman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 21 11:04:58 np0005590810 systemd[1]: var-lib-containers-storage-overlay-26434f160fb72c17941a59e685cfb4f6e8375a2cf20bae0914b3e312c95a1ffa-merged.mount: Deactivated successfully.
Jan 21 11:04:58 np0005590810 podman[85476]: 2026-01-21 16:04:58.135146921 +0000 UTC m=+0.740272338 container remove bb46ff43ae207881106daa1aaa82b7501f061aa9f021327e5a1a4fdbf53e05f6 (image=quay.io/ceph/ceph:v19, name=stupefied_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:04:58 np0005590810 systemd[1]: libpod-conmon-bb46ff43ae207881106daa1aaa82b7501f061aa9f021327e5a1a4fdbf53e05f6.scope: Deactivated successfully.
Jan 21 11:04:58 np0005590810 python3[85554]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:04:58 np0005590810 podman[85555]: 2026-01-21 16:04:58.464315984 +0000 UTC m=+0.038894087 container create 2560cb8584c056ac601f0adb59f756e085a7d79bba7e1173d2bea3b3961c1326 (image=quay.io/ceph/ceph:v19, name=goofy_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 11:04:58 np0005590810 systemd[1]: Started libpod-conmon-2560cb8584c056ac601f0adb59f756e085a7d79bba7e1173d2bea3b3961c1326.scope.
Jan 21 11:04:58 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:58 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8aafe107dbcaa0a955ffba5190fba11ebcc516419e3829188b30d93965237d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:58 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8aafe107dbcaa0a955ffba5190fba11ebcc516419e3829188b30d93965237d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:58 np0005590810 podman[85555]: 2026-01-21 16:04:58.520611621 +0000 UTC m=+0.095189724 container init 2560cb8584c056ac601f0adb59f756e085a7d79bba7e1173d2bea3b3961c1326 (image=quay.io/ceph/ceph:v19, name=goofy_nash, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:04:58 np0005590810 podman[85555]: 2026-01-21 16:04:58.525393989 +0000 UTC m=+0.099972092 container start 2560cb8584c056ac601f0adb59f756e085a7d79bba7e1173d2bea3b3961c1326 (image=quay.io/ceph/ceph:v19, name=goofy_nash, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:04:58 np0005590810 podman[85555]: 2026-01-21 16:04:58.528661441 +0000 UTC m=+0.103239564 container attach 2560cb8584c056ac601f0adb59f756e085a7d79bba7e1173d2bea3b3961c1326 (image=quay.io/ceph/ceph:v19, name=goofy_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 11:04:58 np0005590810 podman[85555]: 2026-01-21 16:04:58.446485811 +0000 UTC m=+0.021063944 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:04:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 11:04:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2763732120' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 11:04:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 21 11:04:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2763732120' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 11:04:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Jan 21 11:04:59 np0005590810 goofy_nash[85571]: pool 'cephfs.cephfs.meta' created
Jan 21 11:04:59 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 21 11:04:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 18 pg[6.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:04:59 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/583775913' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 11:04:59 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2763732120' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 11:04:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 18 pg[5.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:04:59 np0005590810 systemd[1]: libpod-2560cb8584c056ac601f0adb59f756e085a7d79bba7e1173d2bea3b3961c1326.scope: Deactivated successfully.
Jan 21 11:04:59 np0005590810 podman[85555]: 2026-01-21 16:04:59.09655561 +0000 UTC m=+0.671133713 container died 2560cb8584c056ac601f0adb59f756e085a7d79bba7e1173d2bea3b3961c1326 (image=quay.io/ceph/ceph:v19, name=goofy_nash, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 11:04:59 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ec8aafe107dbcaa0a955ffba5190fba11ebcc516419e3829188b30d93965237d-merged.mount: Deactivated successfully.
Jan 21 11:04:59 np0005590810 podman[85555]: 2026-01-21 16:04:59.128391187 +0000 UTC m=+0.702969290 container remove 2560cb8584c056ac601f0adb59f756e085a7d79bba7e1173d2bea3b3961c1326 (image=quay.io/ceph/ceph:v19, name=goofy_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 21 11:04:59 np0005590810 systemd[1]: libpod-conmon-2560cb8584c056ac601f0adb59f756e085a7d79bba7e1173d2bea3b3961c1326.scope: Deactivated successfully.
Jan 21 11:04:59 np0005590810 python3[85635]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:04:59 np0005590810 podman[85636]: 2026-01-21 16:04:59.474208847 +0000 UTC m=+0.043557632 container create ede985bb7f73829c7d790bf8ca1813fec8dbf041710fe67f8564bb2b246b5cbd (image=quay.io/ceph/ceph:v19, name=musing_lehmann, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 11:04:59 np0005590810 systemd[1]: Started libpod-conmon-ede985bb7f73829c7d790bf8ca1813fec8dbf041710fe67f8564bb2b246b5cbd.scope.
Jan 21 11:04:59 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:04:59 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec430f65206aa157a5dbb690912238957fa4c640afb85135e6ec6e011317f14c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:59 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec430f65206aa157a5dbb690912238957fa4c640afb85135e6ec6e011317f14c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:04:59 np0005590810 podman[85636]: 2026-01-21 16:04:59.529573044 +0000 UTC m=+0.098921859 container init ede985bb7f73829c7d790bf8ca1813fec8dbf041710fe67f8564bb2b246b5cbd (image=quay.io/ceph/ceph:v19, name=musing_lehmann, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 21 11:04:59 np0005590810 podman[85636]: 2026-01-21 16:04:59.534822247 +0000 UTC m=+0.104171042 container start ede985bb7f73829c7d790bf8ca1813fec8dbf041710fe67f8564bb2b246b5cbd (image=quay.io/ceph/ceph:v19, name=musing_lehmann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 21 11:04:59 np0005590810 podman[85636]: 2026-01-21 16:04:59.539245335 +0000 UTC m=+0.108594130 container attach ede985bb7f73829c7d790bf8ca1813fec8dbf041710fe67f8564bb2b246b5cbd (image=quay.io/ceph/ceph:v19, name=musing_lehmann, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:04:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v62: 6 pgs: 1 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:04:59 np0005590810 podman[85636]: 2026-01-21 16:04:59.457282432 +0000 UTC m=+0.026631247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:04:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 11:04:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3664186009' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 11:05:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 21 11:05:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3664186009' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 11:05:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Jan 21 11:05:00 np0005590810 musing_lehmann[85651]: pool 'cephfs.cephfs.data' created
Jan 21 11:05:00 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Jan 21 11:05:00 np0005590810 systemd[1]: libpod-ede985bb7f73829c7d790bf8ca1813fec8dbf041710fe67f8564bb2b246b5cbd.scope: Deactivated successfully.
Jan 21 11:05:00 np0005590810 podman[85636]: 2026-01-21 16:05:00.149034984 +0000 UTC m=+0.718383779 container died ede985bb7f73829c7d790bf8ca1813fec8dbf041710fe67f8564bb2b246b5cbd (image=quay.io/ceph/ceph:v19, name=musing_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 11:05:00 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2763732120' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 11:05:00 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3664186009' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 11:05:00 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 19 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:00 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ec430f65206aa157a5dbb690912238957fa4c640afb85135e6ec6e011317f14c-merged.mount: Deactivated successfully.
Jan 21 11:05:00 np0005590810 podman[85636]: 2026-01-21 16:05:00.376728959 +0000 UTC m=+0.946077754 container remove ede985bb7f73829c7d790bf8ca1813fec8dbf041710fe67f8564bb2b246b5cbd (image=quay.io/ceph/ceph:v19, name=musing_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:00 np0005590810 systemd[1]: libpod-conmon-ede985bb7f73829c7d790bf8ca1813fec8dbf041710fe67f8564bb2b246b5cbd.scope: Deactivated successfully.
Jan 21 11:05:00 np0005590810 python3[85716]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:00 np0005590810 podman[85717]: 2026-01-21 16:05:00.754277023 +0000 UTC m=+0.039759095 container create 87a94215696ab09baa22803333ae34ea763fa1ad2a81a485efe9028a02954038 (image=quay.io/ceph/ceph:v19, name=boring_leavitt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:05:00 np0005590810 systemd[1]: Started libpod-conmon-87a94215696ab09baa22803333ae34ea763fa1ad2a81a485efe9028a02954038.scope.
Jan 21 11:05:00 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:00 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395a5a9860d23a15f90e89d489663e9c5905ced2d32a5f281280859946f820e2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:00 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395a5a9860d23a15f90e89d489663e9c5905ced2d32a5f281280859946f820e2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:00 np0005590810 podman[85717]: 2026-01-21 16:05:00.817958608 +0000 UTC m=+0.103440700 container init 87a94215696ab09baa22803333ae34ea763fa1ad2a81a485efe9028a02954038 (image=quay.io/ceph/ceph:v19, name=boring_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:05:00 np0005590810 podman[85717]: 2026-01-21 16:05:00.822824659 +0000 UTC m=+0.108306731 container start 87a94215696ab09baa22803333ae34ea763fa1ad2a81a485efe9028a02954038 (image=quay.io/ceph/ceph:v19, name=boring_leavitt, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:00 np0005590810 podman[85717]: 2026-01-21 16:05:00.826891846 +0000 UTC m=+0.112373938 container attach 87a94215696ab09baa22803333ae34ea763fa1ad2a81a485efe9028a02954038 (image=quay.io/ceph/ceph:v19, name=boring_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 21 11:05:00 np0005590810 podman[85717]: 2026-01-21 16:05:00.736706397 +0000 UTC m=+0.022188489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 21 11:05:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Jan 21 11:05:01 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Jan 21 11:05:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 21 11:05:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1137290245' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 21 11:05:01 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3664186009' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 11:05:01 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/1137290245' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 21 11:05:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 2 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 21 11:05:02 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1137290245' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 21 11:05:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Jan 21 11:05:02 np0005590810 boring_leavitt[85732]: enabled application 'rbd' on pool 'vms'
Jan 21 11:05:02 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Jan 21 11:05:02 np0005590810 systemd[1]: libpod-87a94215696ab09baa22803333ae34ea763fa1ad2a81a485efe9028a02954038.scope: Deactivated successfully.
Jan 21 11:05:02 np0005590810 podman[85717]: 2026-01-21 16:05:02.157956142 +0000 UTC m=+1.443438214 container died 87a94215696ab09baa22803333ae34ea763fa1ad2a81a485efe9028a02954038 (image=quay.io/ceph/ceph:v19, name=boring_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:05:02 np0005590810 systemd[1]: var-lib-containers-storage-overlay-395a5a9860d23a15f90e89d489663e9c5905ced2d32a5f281280859946f820e2-merged.mount: Deactivated successfully.
Jan 21 11:05:02 np0005590810 podman[85717]: 2026-01-21 16:05:02.206693905 +0000 UTC m=+1.492175977 container remove 87a94215696ab09baa22803333ae34ea763fa1ad2a81a485efe9028a02954038 (image=quay.io/ceph/ceph:v19, name=boring_leavitt, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Jan 21 11:05:02 np0005590810 systemd[1]: libpod-conmon-87a94215696ab09baa22803333ae34ea763fa1ad2a81a485efe9028a02954038.scope: Deactivated successfully.
Jan 21 11:05:02 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 11:05:02 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/1137290245' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 21 11:05:02 np0005590810 python3[85795]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:02 np0005590810 podman[85796]: 2026-01-21 16:05:02.536738245 +0000 UTC m=+0.042423808 container create 120b973cc8878b0e581b6bda95e89375df9fe1e39592499f29ceb51f1dae4288 (image=quay.io/ceph/ceph:v19, name=musing_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 21 11:05:02 np0005590810 systemd[1]: Started libpod-conmon-120b973cc8878b0e581b6bda95e89375df9fe1e39592499f29ceb51f1dae4288.scope.
Jan 21 11:05:02 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:02 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcfc8cc26c3b58de10d260e92991269640064ac8f1a8aca62cccec477b0df96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:02 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcfc8cc26c3b58de10d260e92991269640064ac8f1a8aca62cccec477b0df96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:05:02 np0005590810 podman[85796]: 2026-01-21 16:05:02.607566913 +0000 UTC m=+0.113252486 container init 120b973cc8878b0e581b6bda95e89375df9fe1e39592499f29ceb51f1dae4288 (image=quay.io/ceph/ceph:v19, name=musing_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:05:02 np0005590810 podman[85796]: 2026-01-21 16:05:02.518037815 +0000 UTC m=+0.023723398 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:02 np0005590810 podman[85796]: 2026-01-21 16:05:02.613823836 +0000 UTC m=+0.119509399 container start 120b973cc8878b0e581b6bda95e89375df9fe1e39592499f29ceb51f1dae4288 (image=quay.io/ceph/ceph:v19, name=musing_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 21 11:05:02 np0005590810 podman[85796]: 2026-01-21 16:05:02.617033786 +0000 UTC m=+0.122719469 container attach 120b973cc8878b0e581b6bda95e89375df9fe1e39592499f29ceb51f1dae4288 (image=quay.io/ceph/ceph:v19, name=musing_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 21 11:05:02 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4097045334' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 21 11:05:03 np0005590810 ceph-mon[74380]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 11:05:03 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/4097045334' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 21 11:05:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 21 11:05:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4097045334' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 21 11:05:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Jan 21 11:05:03 np0005590810 musing_proskuriakova[85812]: enabled application 'rbd' on pool 'volumes'
Jan 21 11:05:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Jan 21 11:05:03 np0005590810 systemd[1]: libpod-120b973cc8878b0e581b6bda95e89375df9fe1e39592499f29ceb51f1dae4288.scope: Deactivated successfully.
Jan 21 11:05:03 np0005590810 podman[85796]: 2026-01-21 16:05:03.268375354 +0000 UTC m=+0.774060957 container died 120b973cc8878b0e581b6bda95e89375df9fe1e39592499f29ceb51f1dae4288 (image=quay.io/ceph/ceph:v19, name=musing_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:03 np0005590810 systemd[1]: var-lib-containers-storage-overlay-fbcfc8cc26c3b58de10d260e92991269640064ac8f1a8aca62cccec477b0df96-merged.mount: Deactivated successfully.
Jan 21 11:05:03 np0005590810 podman[85796]: 2026-01-21 16:05:03.311322166 +0000 UTC m=+0.817007729 container remove 120b973cc8878b0e581b6bda95e89375df9fe1e39592499f29ceb51f1dae4288 (image=quay.io/ceph/ceph:v19, name=musing_proskuriakova, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:05:03 np0005590810 systemd[1]: libpod-conmon-120b973cc8878b0e581b6bda95e89375df9fe1e39592499f29ceb51f1dae4288.scope: Deactivated successfully.
Jan 21 11:05:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:03 np0005590810 python3[85874]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:03 np0005590810 podman[85875]: 2026-01-21 16:05:03.666835947 +0000 UTC m=+0.046351839 container create 75816c5b597e4daab64804b16e29e2eb58710363b836a004a6de97ba2b444a8d (image=quay.io/ceph/ceph:v19, name=tender_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:03 np0005590810 systemd[1]: Started libpod-conmon-75816c5b597e4daab64804b16e29e2eb58710363b836a004a6de97ba2b444a8d.scope.
Jan 21 11:05:03 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:03 np0005590810 podman[85875]: 2026-01-21 16:05:03.645704151 +0000 UTC m=+0.025220083 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:03 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e040d1f291f70c84e47a87236fc3c4aa608730ded58ebedeec334ff59e1ff9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:03 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e040d1f291f70c84e47a87236fc3c4aa608730ded58ebedeec334ff59e1ff9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:03 np0005590810 podman[85875]: 2026-01-21 16:05:03.753508456 +0000 UTC m=+0.133024368 container init 75816c5b597e4daab64804b16e29e2eb58710363b836a004a6de97ba2b444a8d (image=quay.io/ceph/ceph:v19, name=tender_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:05:03 np0005590810 podman[85875]: 2026-01-21 16:05:03.75877396 +0000 UTC m=+0.138289852 container start 75816c5b597e4daab64804b16e29e2eb58710363b836a004a6de97ba2b444a8d (image=quay.io/ceph/ceph:v19, name=tender_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 11:05:03 np0005590810 podman[85875]: 2026-01-21 16:05:03.761858225 +0000 UTC m=+0.141374137 container attach 75816c5b597e4daab64804b16e29e2eb58710363b836a004a6de97ba2b444a8d (image=quay.io/ceph/ceph:v19, name=tender_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 21 11:05:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4065103768' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 21 11:05:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 21 11:05:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4065103768' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 21 11:05:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Jan 21 11:05:04 np0005590810 tender_johnson[85891]: enabled application 'rbd' on pool 'backups'
Jan 21 11:05:04 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Jan 21 11:05:04 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/4097045334' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 21 11:05:04 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/4065103768' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 21 11:05:04 np0005590810 systemd[1]: libpod-75816c5b597e4daab64804b16e29e2eb58710363b836a004a6de97ba2b444a8d.scope: Deactivated successfully.
Jan 21 11:05:04 np0005590810 podman[85875]: 2026-01-21 16:05:04.261177277 +0000 UTC m=+0.640693169 container died 75816c5b597e4daab64804b16e29e2eb58710363b836a004a6de97ba2b444a8d (image=quay.io/ceph/ceph:v19, name=tender_johnson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:05:04 np0005590810 systemd[1]: var-lib-containers-storage-overlay-9e040d1f291f70c84e47a87236fc3c4aa608730ded58ebedeec334ff59e1ff9e-merged.mount: Deactivated successfully.
Jan 21 11:05:04 np0005590810 podman[85875]: 2026-01-21 16:05:04.293598843 +0000 UTC m=+0.673114735 container remove 75816c5b597e4daab64804b16e29e2eb58710363b836a004a6de97ba2b444a8d (image=quay.io/ceph/ceph:v19, name=tender_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 21 11:05:04 np0005590810 systemd[1]: libpod-conmon-75816c5b597e4daab64804b16e29e2eb58710363b836a004a6de97ba2b444a8d.scope: Deactivated successfully.
Jan 21 11:05:04 np0005590810 python3[85953]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:04 np0005590810 podman[85954]: 2026-01-21 16:05:04.627865574 +0000 UTC m=+0.041430066 container create d941b0f96b5785c061415a66b3598589f4bec051d35025ad9672d50343bb3c93 (image=quay.io/ceph/ceph:v19, name=great_gagarin, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:04 np0005590810 systemd[1]: Started libpod-conmon-d941b0f96b5785c061415a66b3598589f4bec051d35025ad9672d50343bb3c93.scope.
Jan 21 11:05:04 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:04 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd7f3b662a8f098b3283658a9aaa07dcf3a6f283938469f984477130ddfc92b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:04 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd7f3b662a8f098b3283658a9aaa07dcf3a6f283938469f984477130ddfc92b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:04 np0005590810 podman[85954]: 2026-01-21 16:05:04.610263777 +0000 UTC m=+0.023828289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:04 np0005590810 podman[85954]: 2026-01-21 16:05:04.713120109 +0000 UTC m=+0.126684621 container init d941b0f96b5785c061415a66b3598589f4bec051d35025ad9672d50343bb3c93 (image=quay.io/ceph/ceph:v19, name=great_gagarin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 11:05:04 np0005590810 podman[85954]: 2026-01-21 16:05:04.718105734 +0000 UTC m=+0.131670226 container start d941b0f96b5785c061415a66b3598589f4bec051d35025ad9672d50343bb3c93 (image=quay.io/ceph/ceph:v19, name=great_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:05:04 np0005590810 podman[85954]: 2026-01-21 16:05:04.721429807 +0000 UTC m=+0.134994619 container attach d941b0f96b5785c061415a66b3598589f4bec051d35025ad9672d50343bb3c93 (image=quay.io/ceph/ceph:v19, name=great_gagarin, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 21 11:05:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1692143545' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 21 11:05:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 21 11:05:05 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/4065103768' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 21 11:05:05 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/1692143545' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 21 11:05:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1692143545' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 21 11:05:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Jan 21 11:05:05 np0005590810 great_gagarin[85969]: enabled application 'rbd' on pool 'images'
Jan 21 11:05:05 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Jan 21 11:05:05 np0005590810 systemd[1]: libpod-d941b0f96b5785c061415a66b3598589f4bec051d35025ad9672d50343bb3c93.scope: Deactivated successfully.
Jan 21 11:05:05 np0005590810 podman[85954]: 2026-01-21 16:05:05.277126858 +0000 UTC m=+0.690691350 container died d941b0f96b5785c061415a66b3598589f4bec051d35025ad9672d50343bb3c93 (image=quay.io/ceph/ceph:v19, name=great_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 21 11:05:05 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ecd7f3b662a8f098b3283658a9aaa07dcf3a6f283938469f984477130ddfc92b-merged.mount: Deactivated successfully.
Jan 21 11:05:05 np0005590810 podman[85954]: 2026-01-21 16:05:05.312339641 +0000 UTC m=+0.725904133 container remove d941b0f96b5785c061415a66b3598589f4bec051d35025ad9672d50343bb3c93 (image=quay.io/ceph/ceph:v19, name=great_gagarin, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 11:05:05 np0005590810 systemd[1]: libpod-conmon-d941b0f96b5785c061415a66b3598589f4bec051d35025ad9672d50343bb3c93.scope: Deactivated successfully.
Jan 21 11:05:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:05 np0005590810 python3[86030]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:05 np0005590810 podman[86031]: 2026-01-21 16:05:05.666420276 +0000 UTC m=+0.040649972 container create dacedf93479aa46304713f89d273ebc198a00829dada6ee11175fa75bb02a345 (image=quay.io/ceph/ceph:v19, name=admiring_payne, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:05:05 np0005590810 systemd[1]: Started libpod-conmon-dacedf93479aa46304713f89d273ebc198a00829dada6ee11175fa75bb02a345.scope.
Jan 21 11:05:05 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:05 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cf36b67e9900113fa1278a628fb50788fb30936d823d0dde628a45790b920f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:05 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cf36b67e9900113fa1278a628fb50788fb30936d823d0dde628a45790b920f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:05 np0005590810 podman[86031]: 2026-01-21 16:05:05.741700661 +0000 UTC m=+0.115930357 container init dacedf93479aa46304713f89d273ebc198a00829dada6ee11175fa75bb02a345 (image=quay.io/ceph/ceph:v19, name=admiring_payne, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 21 11:05:05 np0005590810 podman[86031]: 2026-01-21 16:05:05.646387464 +0000 UTC m=+0.020617180 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:05 np0005590810 podman[86031]: 2026-01-21 16:05:05.747293885 +0000 UTC m=+0.121523581 container start dacedf93479aa46304713f89d273ebc198a00829dada6ee11175fa75bb02a345 (image=quay.io/ceph/ceph:v19, name=admiring_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 21 11:05:05 np0005590810 podman[86031]: 2026-01-21 16:05:05.750653399 +0000 UTC m=+0.124883095 container attach dacedf93479aa46304713f89d273ebc198a00829dada6ee11175fa75bb02a345 (image=quay.io/ceph/ceph:v19, name=admiring_payne, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 21 11:05:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 21 11:05:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/553856798' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 21 11:05:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 21 11:05:06 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/1692143545' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 21 11:05:06 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/553856798' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 21 11:05:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/553856798' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 21 11:05:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Jan 21 11:05:06 np0005590810 admiring_payne[86047]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 21 11:05:06 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Jan 21 11:05:06 np0005590810 systemd[1]: libpod-dacedf93479aa46304713f89d273ebc198a00829dada6ee11175fa75bb02a345.scope: Deactivated successfully.
Jan 21 11:05:06 np0005590810 podman[86031]: 2026-01-21 16:05:06.289742605 +0000 UTC m=+0.663972301 container died dacedf93479aa46304713f89d273ebc198a00829dada6ee11175fa75bb02a345 (image=quay.io/ceph/ceph:v19, name=admiring_payne, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:05:06 np0005590810 systemd[1]: var-lib-containers-storage-overlay-68cf36b67e9900113fa1278a628fb50788fb30936d823d0dde628a45790b920f-merged.mount: Deactivated successfully.
Jan 21 11:05:06 np0005590810 podman[86031]: 2026-01-21 16:05:06.325360721 +0000 UTC m=+0.699590417 container remove dacedf93479aa46304713f89d273ebc198a00829dada6ee11175fa75bb02a345 (image=quay.io/ceph/ceph:v19, name=admiring_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:05:06 np0005590810 systemd[1]: libpod-conmon-dacedf93479aa46304713f89d273ebc198a00829dada6ee11175fa75bb02a345.scope: Deactivated successfully.
Jan 21 11:05:06 np0005590810 python3[86108]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:06 np0005590810 podman[86109]: 2026-01-21 16:05:06.67215363 +0000 UTC m=+0.047318939 container create 2d9c35a2e68c7d686efea06a13af47cd0dd35b32cb74695b84813c7b71e1f383 (image=quay.io/ceph/ceph:v19, name=musing_swartz, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 11:05:06 np0005590810 systemd[1]: Started libpod-conmon-2d9c35a2e68c7d686efea06a13af47cd0dd35b32cb74695b84813c7b71e1f383.scope.
Jan 21 11:05:06 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7203256ac904955511a150dd7dbfcaf6a5006e715ff3a0c3bb38b36495ea2d51/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7203256ac904955511a150dd7dbfcaf6a5006e715ff3a0c3bb38b36495ea2d51/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:06 np0005590810 podman[86109]: 2026-01-21 16:05:06.737363477 +0000 UTC m=+0.112528816 container init 2d9c35a2e68c7d686efea06a13af47cd0dd35b32cb74695b84813c7b71e1f383 (image=quay.io/ceph/ceph:v19, name=musing_swartz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:05:06 np0005590810 podman[86109]: 2026-01-21 16:05:06.742199601 +0000 UTC m=+0.117364910 container start 2d9c35a2e68c7d686efea06a13af47cd0dd35b32cb74695b84813c7b71e1f383 (image=quay.io/ceph/ceph:v19, name=musing_swartz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:06 np0005590810 podman[86109]: 2026-01-21 16:05:06.651050055 +0000 UTC m=+0.026215414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:06 np0005590810 podman[86109]: 2026-01-21 16:05:06.74690736 +0000 UTC m=+0.122072699 container attach 2d9c35a2e68c7d686efea06a13af47cd0dd35b32cb74695b84813c7b71e1f383 (image=quay.io/ceph/ceph:v19, name=musing_swartz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 11:05:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 21 11:05:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2667286320' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 21 11:05:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 21 11:05:07 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/553856798' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 21 11:05:07 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2667286320' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 21 11:05:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2667286320' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 21 11:05:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Jan 21 11:05:07 np0005590810 musing_swartz[86125]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 21 11:05:07 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Jan 21 11:05:07 np0005590810 systemd[1]: libpod-2d9c35a2e68c7d686efea06a13af47cd0dd35b32cb74695b84813c7b71e1f383.scope: Deactivated successfully.
Jan 21 11:05:07 np0005590810 podman[86109]: 2026-01-21 16:05:07.303586191 +0000 UTC m=+0.678751500 container died 2d9c35a2e68c7d686efea06a13af47cd0dd35b32cb74695b84813c7b71e1f383 (image=quay.io/ceph/ceph:v19, name=musing_swartz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:05:07 np0005590810 systemd[1]: var-lib-containers-storage-overlay-7203256ac904955511a150dd7dbfcaf6a5006e715ff3a0c3bb38b36495ea2d51-merged.mount: Deactivated successfully.
Jan 21 11:05:07 np0005590810 podman[86109]: 2026-01-21 16:05:07.33681154 +0000 UTC m=+0.711976859 container remove 2d9c35a2e68c7d686efea06a13af47cd0dd35b32cb74695b84813c7b71e1f383 (image=quay.io/ceph/ceph:v19, name=musing_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:05:07 np0005590810 systemd[1]: libpod-conmon-2d9c35a2e68c7d686efea06a13af47cd0dd35b32cb74695b84813c7b71e1f383.scope: Deactivated successfully.
Jan 21 11:05:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:07 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 11:05:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:05:08 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Jan 21 11:05:08 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2667286320' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 21 11:05:08 np0005590810 ceph-mon[74380]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 11:05:08 np0005590810 python3[86236]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 11:05:08 np0005590810 python3[86307]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769011508.0557718-37365-197370062729187/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:05:09 np0005590810 python3[86409]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 11:05:09 np0005590810 ceph-mon[74380]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Jan 21 11:05:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:09 np0005590810 python3[86484]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769011508.922723-37379-214186624072517/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=d0de6ad685c52dbb5e8b6e54efed65cacbf63147 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:05:09 np0005590810 python3[86534]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:10 np0005590810 podman[86535]: 2026-01-21 16:05:10.0147794 +0000 UTC m=+0.052667451 container create 6f68182756552d782706625a19b00d64c2a3d33082628c62112c48ba4b74e348 (image=quay.io/ceph/ceph:v19, name=xenodochial_jackson, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 11:05:10 np0005590810 systemd[1]: Started libpod-conmon-6f68182756552d782706625a19b00d64c2a3d33082628c62112c48ba4b74e348.scope.
Jan 21 11:05:10 np0005590810 podman[86535]: 2026-01-21 16:05:09.990859196 +0000 UTC m=+0.028747337 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:10 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:10 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0096cc45a064b9e4e1ed7fa385ae395186d857611a0c3f057f5e866fb0f23462/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:10 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0096cc45a064b9e4e1ed7fa385ae395186d857611a0c3f057f5e866fb0f23462/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:10 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0096cc45a064b9e4e1ed7fa385ae395186d857611a0c3f057f5e866fb0f23462/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:10 np0005590810 podman[86535]: 2026-01-21 16:05:10.107349386 +0000 UTC m=+0.145237457 container init 6f68182756552d782706625a19b00d64c2a3d33082628c62112c48ba4b74e348 (image=quay.io/ceph/ceph:v19, name=xenodochial_jackson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 21 11:05:10 np0005590810 podman[86535]: 2026-01-21 16:05:10.113934229 +0000 UTC m=+0.151822280 container start 6f68182756552d782706625a19b00d64c2a3d33082628c62112c48ba4b74e348 (image=quay.io/ceph/ceph:v19, name=xenodochial_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 11:05:10 np0005590810 podman[86535]: 2026-01-21 16:05:10.117388837 +0000 UTC m=+0.155276888 container attach 6f68182756552d782706625a19b00d64c2a3d33082628c62112c48ba4b74e348 (image=quay.io/ceph/ceph:v19, name=xenodochial_jackson, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 21 11:05:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 21 11:05:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1305951814' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 21 11:05:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1305951814' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 11:05:10 np0005590810 xenodochial_jackson[86550]: 
Jan 21 11:05:10 np0005590810 xenodochial_jackson[86550]: [global]
Jan 21 11:05:10 np0005590810 xenodochial_jackson[86550]: #011fsid = d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:05:10 np0005590810 xenodochial_jackson[86550]: #011mon_host = 192.168.122.100
Jan 21 11:05:10 np0005590810 systemd[1]: libpod-6f68182756552d782706625a19b00d64c2a3d33082628c62112c48ba4b74e348.scope: Deactivated successfully.
Jan 21 11:05:10 np0005590810 podman[86535]: 2026-01-21 16:05:10.486866435 +0000 UTC m=+0.524754496 container died 6f68182756552d782706625a19b00d64c2a3d33082628c62112c48ba4b74e348 (image=quay.io/ceph/ceph:v19, name=xenodochial_jackson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:05:10 np0005590810 systemd[1]: var-lib-containers-storage-overlay-0096cc45a064b9e4e1ed7fa385ae395186d857611a0c3f057f5e866fb0f23462-merged.mount: Deactivated successfully.
Jan 21 11:05:10 np0005590810 podman[86535]: 2026-01-21 16:05:10.520418275 +0000 UTC m=+0.558306326 container remove 6f68182756552d782706625a19b00d64c2a3d33082628c62112c48ba4b74e348 (image=quay.io/ceph/ceph:v19, name=xenodochial_jackson, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:05:10 np0005590810 systemd[1]: libpod-conmon-6f68182756552d782706625a19b00d64c2a3d33082628c62112c48ba4b74e348.scope: Deactivated successfully.
Jan 21 11:05:10 np0005590810 python3[86613]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:10 np0005590810 podman[86614]: 2026-01-21 16:05:10.846776798 +0000 UTC m=+0.039365199 container create c0aa65c92ea5367f04ecfdc80026b915f1aa47c5f81838dcca649d4721aeda96 (image=quay.io/ceph/ceph:v19, name=thirsty_leakey, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:10 np0005590810 systemd[1]: Started libpod-conmon-c0aa65c92ea5367f04ecfdc80026b915f1aa47c5f81838dcca649d4721aeda96.scope.
Jan 21 11:05:10 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:10 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71270c938078fcf2cb52ad1110c93011881430d02f09d0758795b25a3c7cb2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:10 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71270c938078fcf2cb52ad1110c93011881430d02f09d0758795b25a3c7cb2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:10 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71270c938078fcf2cb52ad1110c93011881430d02f09d0758795b25a3c7cb2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:10 np0005590810 podman[86614]: 2026-01-21 16:05:10.830334919 +0000 UTC m=+0.022923340 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:10 np0005590810 podman[86614]: 2026-01-21 16:05:10.927084087 +0000 UTC m=+0.119672508 container init c0aa65c92ea5367f04ecfdc80026b915f1aa47c5f81838dcca649d4721aeda96 (image=quay.io/ceph/ceph:v19, name=thirsty_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:05:10 np0005590810 podman[86614]: 2026-01-21 16:05:10.932959407 +0000 UTC m=+0.125547808 container start c0aa65c92ea5367f04ecfdc80026b915f1aa47c5f81838dcca649d4721aeda96 (image=quay.io/ceph/ceph:v19, name=thirsty_leakey, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:10 np0005590810 podman[86614]: 2026-01-21 16:05:10.936082883 +0000 UTC m=+0.128671314 container attach c0aa65c92ea5367f04ecfdc80026b915f1aa47c5f81838dcca649d4721aeda96 (image=quay.io/ceph/ceph:v19, name=thirsty_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:11 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/1305951814' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 21 11:05:11 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/1305951814' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 11:05:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 21 11:05:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2531399288' entity='client.admin' 
Jan 21 11:05:11 np0005590810 thirsty_leakey[86629]: set ssl_option
Jan 21 11:05:11 np0005590810 systemd[1]: libpod-c0aa65c92ea5367f04ecfdc80026b915f1aa47c5f81838dcca649d4721aeda96.scope: Deactivated successfully.
Jan 21 11:05:11 np0005590810 podman[86614]: 2026-01-21 16:05:11.398344695 +0000 UTC m=+0.590933146 container died c0aa65c92ea5367f04ecfdc80026b915f1aa47c5f81838dcca649d4721aeda96 (image=quay.io/ceph/ceph:v19, name=thirsty_leakey, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:05:11 np0005590810 systemd[1]: var-lib-containers-storage-overlay-0f71270c938078fcf2cb52ad1110c93011881430d02f09d0758795b25a3c7cb2-merged.mount: Deactivated successfully.
Jan 21 11:05:11 np0005590810 podman[86614]: 2026-01-21 16:05:11.430153425 +0000 UTC m=+0.622741826 container remove c0aa65c92ea5367f04ecfdc80026b915f1aa47c5f81838dcca649d4721aeda96 (image=quay.io/ceph/ceph:v19, name=thirsty_leakey, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:11 np0005590810 systemd[1]: libpod-conmon-c0aa65c92ea5367f04ecfdc80026b915f1aa47c5f81838dcca649d4721aeda96.scope: Deactivated successfully.
Jan 21 11:05:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:11 np0005590810 python3[86691]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:11 np0005590810 podman[86692]: 2026-01-21 16:05:11.780376459 +0000 UTC m=+0.040299390 container create 68a34bdf987e44552de3cc0a1982b6ac42c18f4041419765e62f5a91912ea567 (image=quay.io/ceph/ceph:v19, name=charming_williams, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:11 np0005590810 systemd[1]: Started libpod-conmon-68a34bdf987e44552de3cc0a1982b6ac42c18f4041419765e62f5a91912ea567.scope.
Jan 21 11:05:11 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:11 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c8bcbd6b8db333b0b19009b4fdec00a069f484a9d2846eaf7235f198a26d8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:11 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c8bcbd6b8db333b0b19009b4fdec00a069f484a9d2846eaf7235f198a26d8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:11 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c8bcbd6b8db333b0b19009b4fdec00a069f484a9d2846eaf7235f198a26d8b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:11 np0005590810 podman[86692]: 2026-01-21 16:05:11.839457517 +0000 UTC m=+0.099380458 container init 68a34bdf987e44552de3cc0a1982b6ac42c18f4041419765e62f5a91912ea567 (image=quay.io/ceph/ceph:v19, name=charming_williams, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:05:11 np0005590810 podman[86692]: 2026-01-21 16:05:11.845249494 +0000 UTC m=+0.105172435 container start 68a34bdf987e44552de3cc0a1982b6ac42c18f4041419765e62f5a91912ea567 (image=quay.io/ceph/ceph:v19, name=charming_williams, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 11:05:11 np0005590810 podman[86692]: 2026-01-21 16:05:11.848212755 +0000 UTC m=+0.108135696 container attach 68a34bdf987e44552de3cc0a1982b6ac42c18f4041419765e62f5a91912ea567 (image=quay.io/ceph/ceph:v19, name=charming_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:05:11 np0005590810 podman[86692]: 2026-01-21 16:05:11.763750644 +0000 UTC m=+0.023673615 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:12 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14223 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:05:12 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 21 11:05:12 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 21 11:05:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 11:05:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:12 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 21 11:05:12 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 21 11:05:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 21 11:05:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:12 np0005590810 charming_williams[86708]: Scheduled rgw.rgw update...
Jan 21 11:05:12 np0005590810 charming_williams[86708]: Scheduled ingress.rgw.default update...
Jan 21 11:05:12 np0005590810 systemd[1]: libpod-68a34bdf987e44552de3cc0a1982b6ac42c18f4041419765e62f5a91912ea567.scope: Deactivated successfully.
Jan 21 11:05:12 np0005590810 podman[86692]: 2026-01-21 16:05:12.23086966 +0000 UTC m=+0.490792601 container died 68a34bdf987e44552de3cc0a1982b6ac42c18f4041419765e62f5a91912ea567 (image=quay.io/ceph/ceph:v19, name=charming_williams, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:05:12 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c6c8bcbd6b8db333b0b19009b4fdec00a069f484a9d2846eaf7235f198a26d8b-merged.mount: Deactivated successfully.
Jan 21 11:05:12 np0005590810 podman[86692]: 2026-01-21 16:05:12.266695779 +0000 UTC m=+0.526618720 container remove 68a34bdf987e44552de3cc0a1982b6ac42c18f4041419765e62f5a91912ea567 (image=quay.io/ceph/ceph:v19, name=charming_williams, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:05:12 np0005590810 systemd[1]: libpod-conmon-68a34bdf987e44552de3cc0a1982b6ac42c18f4041419765e62f5a91912ea567.scope: Deactivated successfully.
Jan 21 11:05:12 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2531399288' entity='client.admin' 
Jan 21 11:05:12 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:12 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:05:12 np0005590810 python3[86821]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 11:05:12 np0005590810 python3[86892]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769011512.422406-37398-147599790629312/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:05:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:13 np0005590810 ceph-mon[74380]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 21 11:05:13 np0005590810 ceph-mon[74380]: Saving service ingress.rgw.default spec with placement count:2
Jan 21 11:05:14 np0005590810 python3[86942]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:14 np0005590810 podman[86943]: 2026-01-21 16:05:14.18917433 +0000 UTC m=+0.050269710 container create 71449aaf787fdbfc5c428f55146cc9b28dd68f674f10f3f47fd7e56364dccae2 (image=quay.io/ceph/ceph:v19, name=practical_vaughan, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:05:14 np0005590810 systemd[1]: Started libpod-conmon-71449aaf787fdbfc5c428f55146cc9b28dd68f674f10f3f47fd7e56364dccae2.scope.
Jan 21 11:05:14 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:14 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33a20e94f860db538d5d96154e3d955bd91c274d48ab7ce57c453d3b6f8c2afe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:14 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33a20e94f860db538d5d96154e3d955bd91c274d48ab7ce57c453d3b6f8c2afe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:14 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33a20e94f860db538d5d96154e3d955bd91c274d48ab7ce57c453d3b6f8c2afe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:14 np0005590810 podman[86943]: 2026-01-21 16:05:14.24507354 +0000 UTC m=+0.106168950 container init 71449aaf787fdbfc5c428f55146cc9b28dd68f674f10f3f47fd7e56364dccae2 (image=quay.io/ceph/ceph:v19, name=practical_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 21 11:05:14 np0005590810 podman[86943]: 2026-01-21 16:05:14.250265876 +0000 UTC m=+0.111361256 container start 71449aaf787fdbfc5c428f55146cc9b28dd68f674f10f3f47fd7e56364dccae2 (image=quay.io/ceph/ceph:v19, name=practical_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Jan 21 11:05:14 np0005590810 podman[86943]: 2026-01-21 16:05:14.256718906 +0000 UTC m=+0.117814316 container attach 71449aaf787fdbfc5c428f55146cc9b28dd68f674f10f3f47fd7e56364dccae2 (image=quay.io/ceph/ceph:v19, name=practical_vaughan, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:05:14 np0005590810 podman[86943]: 2026-01-21 16:05:14.168431405 +0000 UTC m=+0.029526815 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:14 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14225 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:05:14 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service node-exporter spec with placement *
Jan 21 11:05:14 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Jan 21 11:05:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 21 11:05:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:14 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Jan 21 11:05:14 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Jan 21 11:05:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 21 11:05:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:14 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Jan 21 11:05:14 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Jan 21 11:05:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 21 11:05:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:14 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Jan 21 11:05:14 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Jan 21 11:05:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 21 11:05:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:14 np0005590810 practical_vaughan[86958]: Scheduled node-exporter update...
Jan 21 11:05:14 np0005590810 practical_vaughan[86958]: Scheduled grafana update...
Jan 21 11:05:14 np0005590810 practical_vaughan[86958]: Scheduled prometheus update...
Jan 21 11:05:14 np0005590810 practical_vaughan[86958]: Scheduled alertmanager update...
Jan 21 11:05:14 np0005590810 systemd[1]: libpod-71449aaf787fdbfc5c428f55146cc9b28dd68f674f10f3f47fd7e56364dccae2.scope: Deactivated successfully.
Jan 21 11:05:14 np0005590810 podman[86983]: 2026-01-21 16:05:14.697335421 +0000 UTC m=+0.025644283 container died 71449aaf787fdbfc5c428f55146cc9b28dd68f674f10f3f47fd7e56364dccae2 (image=quay.io/ceph/ceph:v19, name=practical_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:14 np0005590810 systemd[1]: var-lib-containers-storage-overlay-33a20e94f860db538d5d96154e3d955bd91c274d48ab7ce57c453d3b6f8c2afe-merged.mount: Deactivated successfully.
Jan 21 11:05:14 np0005590810 podman[86983]: 2026-01-21 16:05:14.730287732 +0000 UTC m=+0.058596584 container remove 71449aaf787fdbfc5c428f55146cc9b28dd68f674f10f3f47fd7e56364dccae2 (image=quay.io/ceph/ceph:v19, name=practical_vaughan, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:05:14 np0005590810 systemd[1]: libpod-conmon-71449aaf787fdbfc5c428f55146cc9b28dd68f674f10f3f47fd7e56364dccae2.scope: Deactivated successfully.
Jan 21 11:05:15 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:15 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:15 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:15 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:15 np0005590810 python3[87024]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:15 np0005590810 podman[87025]: 2026-01-21 16:05:15.291253537 +0000 UTC m=+0.042471944 container create ebb6e0000a27f321cf7ae87221a66017a9071fac0858b88d5a7f27238deb4b65 (image=quay.io/ceph/ceph:v19, name=flamboyant_agnesi, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:15 np0005590810 systemd[1]: Started libpod-conmon-ebb6e0000a27f321cf7ae87221a66017a9071fac0858b88d5a7f27238deb4b65.scope.
Jan 21 11:05:15 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985927ea00edf34ab65c7d7ddb288e40d7f755ef47ac4f1247b992ea21d3ea03/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985927ea00edf34ab65c7d7ddb288e40d7f755ef47ac4f1247b992ea21d3ea03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985927ea00edf34ab65c7d7ddb288e40d7f755ef47ac4f1247b992ea21d3ea03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:15 np0005590810 podman[87025]: 2026-01-21 16:05:15.35519078 +0000 UTC m=+0.106409227 container init ebb6e0000a27f321cf7ae87221a66017a9071fac0858b88d5a7f27238deb4b65 (image=quay.io/ceph/ceph:v19, name=flamboyant_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 21 11:05:15 np0005590810 podman[87025]: 2026-01-21 16:05:15.360597894 +0000 UTC m=+0.111816301 container start ebb6e0000a27f321cf7ae87221a66017a9071fac0858b88d5a7f27238deb4b65 (image=quay.io/ceph/ceph:v19, name=flamboyant_agnesi, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:15 np0005590810 podman[87025]: 2026-01-21 16:05:15.363873425 +0000 UTC m=+0.115091842 container attach ebb6e0000a27f321cf7ae87221a66017a9071fac0858b88d5a7f27238deb4b65 (image=quay.io/ceph/ceph:v19, name=flamboyant_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 11:05:15 np0005590810 podman[87025]: 2026-01-21 16:05:15.271085442 +0000 UTC m=+0.022303869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Jan 21 11:05:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3614795954' entity='client.admin' 
Jan 21 11:05:15 np0005590810 systemd[1]: libpod-ebb6e0000a27f321cf7ae87221a66017a9071fac0858b88d5a7f27238deb4b65.scope: Deactivated successfully.
Jan 21 11:05:15 np0005590810 conmon[87040]: conmon ebb6e0000a27f321cf7a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebb6e0000a27f321cf7ae87221a66017a9071fac0858b88d5a7f27238deb4b65.scope/container/memory.events
Jan 21 11:05:15 np0005590810 podman[87025]: 2026-01-21 16:05:15.775874839 +0000 UTC m=+0.527093246 container died ebb6e0000a27f321cf7ae87221a66017a9071fac0858b88d5a7f27238deb4b65 (image=quay.io/ceph/ceph:v19, name=flamboyant_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 21 11:05:15 np0005590810 systemd[1]: var-lib-containers-storage-overlay-985927ea00edf34ab65c7d7ddb288e40d7f755ef47ac4f1247b992ea21d3ea03-merged.mount: Deactivated successfully.
Jan 21 11:05:15 np0005590810 podman[87025]: 2026-01-21 16:05:15.817869666 +0000 UTC m=+0.569088073 container remove ebb6e0000a27f321cf7ae87221a66017a9071fac0858b88d5a7f27238deb4b65 (image=quay.io/ceph/ceph:v19, name=flamboyant_agnesi, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 21 11:05:15 np0005590810 systemd[1]: libpod-conmon-ebb6e0000a27f321cf7ae87221a66017a9071fac0858b88d5a7f27238deb4b65.scope: Deactivated successfully.
Jan 21 11:05:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:05:15
Jan 21 11:05:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:05:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:05:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'images', 'backups', 'volumes']
Jan 21 11:05:15 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:05:16 np0005590810 ceph-mon[74380]: Saving service node-exporter spec with placement *
Jan 21 11:05:16 np0005590810 ceph-mon[74380]: Saving service grafana spec with placement compute-0;count:1
Jan 21 11:05:16 np0005590810 ceph-mon[74380]: Saving service prometheus spec with placement compute-0;count:1
Jan 21 11:05:16 np0005590810 ceph-mon[74380]: Saving service alertmanager spec with placement compute-0;count:1
Jan 21 11:05:16 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3614795954' entity='client.admin' 
Jan 21 11:05:16 np0005590810 python3[87102]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:16 np0005590810 podman[87103]: 2026-01-21 16:05:16.1669466 +0000 UTC m=+0.040013920 container create 1c511ba7c2ff48c41837e3e3c2c9c7c9f16c864fd96e9e0c8516d5510b673685 (image=quay.io/ceph/ceph:v19, name=zealous_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 21 11:05:16 np0005590810 systemd[1]: Started libpod-conmon-1c511ba7c2ff48c41837e3e3c2c9c7c9f16c864fd96e9e0c8516d5510b673685.scope.
Jan 21 11:05:16 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddce54d35ed8d0926759baf300f5d01826d009d2c836c67303aeae446ad54d46/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddce54d35ed8d0926759baf300f5d01826d009d2c836c67303aeae446ad54d46/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddce54d35ed8d0926759baf300f5d01826d009d2c836c67303aeae446ad54d46/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:16 np0005590810 podman[87103]: 2026-01-21 16:05:16.237709766 +0000 UTC m=+0.110777146 container init 1c511ba7c2ff48c41837e3e3c2c9c7c9f16c864fd96e9e0c8516d5510b673685 (image=quay.io/ceph/ceph:v19, name=zealous_spence, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 21 11:05:16 np0005590810 podman[87103]: 2026-01-21 16:05:16.148768563 +0000 UTC m=+0.021835893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:16 np0005590810 podman[87103]: 2026-01-21 16:05:16.246159143 +0000 UTC m=+0.119226473 container start 1c511ba7c2ff48c41837e3e3c2c9c7c9f16c864fd96e9e0c8516d5510b673685 (image=quay.io/ceph/ceph:v19, name=zealous_spence, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:05:16 np0005590810 podman[87103]: 2026-01-21 16:05:16.249971243 +0000 UTC m=+0.123038573 container attach 1c511ba7c2ff48c41837e3e3c2c9c7c9f16c864fd96e9e0c8516d5510b673685 (image=quay.io/ceph/ceph:v19, name=zealous_spence, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 11:05:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 21 11:05:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:05:16 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:05:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Jan 21 11:05:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/912917705' entity='client.admin' 
Jan 21 11:05:16 np0005590810 systemd[1]: libpod-1c511ba7c2ff48c41837e3e3c2c9c7c9f16c864fd96e9e0c8516d5510b673685.scope: Deactivated successfully.
Jan 21 11:05:16 np0005590810 podman[87103]: 2026-01-21 16:05:16.631143818 +0000 UTC m=+0.504211168 container died 1c511ba7c2ff48c41837e3e3c2c9c7c9f16c864fd96e9e0c8516d5510b673685 (image=quay.io/ceph/ceph:v19, name=zealous_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 21 11:05:16 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ddce54d35ed8d0926759baf300f5d01826d009d2c836c67303aeae446ad54d46-merged.mount: Deactivated successfully.
Jan 21 11:05:16 np0005590810 podman[87103]: 2026-01-21 16:05:16.673848759 +0000 UTC m=+0.546916089 container remove 1c511ba7c2ff48c41837e3e3c2c9c7c9f16c864fd96e9e0c8516d5510b673685 (image=quay.io/ceph/ceph:v19, name=zealous_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 11:05:16 np0005590810 systemd[1]: libpod-conmon-1c511ba7c2ff48c41837e3e3c2c9c7c9f16c864fd96e9e0c8516d5510b673685.scope: Deactivated successfully.
Jan 21 11:05:16 np0005590810 python3[87180]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:17 np0005590810 podman[87181]: 2026-01-21 16:05:17.03458439 +0000 UTC m=+0.044948229 container create fe8cdadb32500e57122c058388d95e37cbf84672e81849ccf8e44c838880c524 (image=quay.io/ceph/ceph:v19, name=festive_elgamal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Jan 21 11:05:17 np0005590810 systemd[1]: Started libpod-conmon-fe8cdadb32500e57122c058388d95e37cbf84672e81849ccf8e44c838880c524.scope.
Jan 21 11:05:17 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 83cc3dbc-bc37-4587-be9c-3fc920f00d18 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/912917705' entity='client.admin' 
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:05:17 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:17 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4c7e4c65d8ccae1eb94d0b97090c20ec5e2468cb2351b63a57ff5dc96dc9a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:17 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4c7e4c65d8ccae1eb94d0b97090c20ec5e2468cb2351b63a57ff5dc96dc9a7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:17 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4c7e4c65d8ccae1eb94d0b97090c20ec5e2468cb2351b63a57ff5dc96dc9a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:17 np0005590810 podman[87181]: 2026-01-21 16:05:17.109421124 +0000 UTC m=+0.119784973 container init fe8cdadb32500e57122c058388d95e37cbf84672e81849ccf8e44c838880c524 (image=quay.io/ceph/ceph:v19, name=festive_elgamal, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 21 11:05:17 np0005590810 podman[87181]: 2026-01-21 16:05:17.014578991 +0000 UTC m=+0.024942850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:17 np0005590810 podman[87181]: 2026-01-21 16:05:17.115435379 +0000 UTC m=+0.125799218 container start fe8cdadb32500e57122c058388d95e37cbf84672e81849ccf8e44c838880c524 (image=quay.io/ceph/ceph:v19, name=festive_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:05:17 np0005590810 podman[87181]: 2026-01-21 16:05:17.118335557 +0000 UTC m=+0.128699396 container attach fe8cdadb32500e57122c058388d95e37cbf84672e81849ccf8e44c838880c524 (image=quay.io/ceph/ceph:v19, name=festive_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2255539159' entity='client.admin' 
Jan 21 11:05:17 np0005590810 systemd[1]: libpod-fe8cdadb32500e57122c058388d95e37cbf84672e81849ccf8e44c838880c524.scope: Deactivated successfully.
Jan 21 11:05:17 np0005590810 podman[87181]: 2026-01-21 16:05:17.501811781 +0000 UTC m=+0.512175620 container died fe8cdadb32500e57122c058388d95e37cbf84672e81849ccf8e44c838880c524 (image=quay.io/ceph/ceph:v19, name=festive_elgamal, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:05:17 np0005590810 systemd[1]: var-lib-containers-storage-overlay-4a4c7e4c65d8ccae1eb94d0b97090c20ec5e2468cb2351b63a57ff5dc96dc9a7-merged.mount: Deactivated successfully.
Jan 21 11:05:17 np0005590810 podman[87181]: 2026-01-21 16:05:17.539384357 +0000 UTC m=+0.549748206 container remove fe8cdadb32500e57122c058388d95e37cbf84672e81849ccf8e44c838880c524 (image=quay.io/ceph/ceph:v19, name=festive_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 21 11:05:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:17 np0005590810 systemd[1]: libpod-conmon-fe8cdadb32500e57122c058388d95e37cbf84672e81849ccf8e44c838880c524.scope: Deactivated successfully.
Jan 21 11:05:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:05:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 21 11:05:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:05:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:05:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Jan 21 11:05:18 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Jan 21 11:05:18 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 491915ee-6c03-4e45-8ca0-de572b868fe9 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 21 11:05:18 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:05:18 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:05:18 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2255539159' entity='client.admin' 
Jan 21 11:05:18 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 21 11:05:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:05:18 np0005590810 python3[87256]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:18 np0005590810 python3[87293]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.ygffhs/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:18 np0005590810 podman[87294]: 2026-01-21 16:05:18.6975325 +0000 UTC m=+0.043752238 container create 67948545aeb0671c6c21be8cef8af55b43acaf873ea324e393fcb06e958e1e17 (image=quay.io/ceph/ceph:v19, name=epic_chebyshev, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:05:18 np0005590810 systemd[1]: Started libpod-conmon-67948545aeb0671c6c21be8cef8af55b43acaf873ea324e393fcb06e958e1e17.scope.
Jan 21 11:05:18 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b7097cf27dba9fafe37bb7a656718281802d7952ad5acb3234dc76bc3d6125/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b7097cf27dba9fafe37bb7a656718281802d7952ad5acb3234dc76bc3d6125/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b7097cf27dba9fafe37bb7a656718281802d7952ad5acb3234dc76bc3d6125/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:18 np0005590810 podman[87294]: 2026-01-21 16:05:18.766818196 +0000 UTC m=+0.113037954 container init 67948545aeb0671c6c21be8cef8af55b43acaf873ea324e393fcb06e958e1e17 (image=quay.io/ceph/ceph:v19, name=epic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:05:18 np0005590810 podman[87294]: 2026-01-21 16:05:18.771297418 +0000 UTC m=+0.117517146 container start 67948545aeb0671c6c21be8cef8af55b43acaf873ea324e393fcb06e958e1e17 (image=quay.io/ceph/ceph:v19, name=epic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:05:18 np0005590810 podman[87294]: 2026-01-21 16:05:18.773847164 +0000 UTC m=+0.120066932 container attach 67948545aeb0671c6c21be8cef8af55b43acaf873ea324e393fcb06e958e1e17 (image=quay.io/ceph/ceph:v19, name=epic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:05:18 np0005590810 podman[87294]: 2026-01-21 16:05:18.680556903 +0000 UTC m=+0.026776661 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.ygffhs/server_addr}] v 0)
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:05:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v82: 38 pgs: 1 peering, 31 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/472381783' entity='client.admin' 
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Jan 21 11:05:19 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 54ffe3d2-727a-4ebc-ab53-9f461829242e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 21 11:05:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:05:19 np0005590810 systemd[1]: libpod-67948545aeb0671c6c21be8cef8af55b43acaf873ea324e393fcb06e958e1e17.scope: Deactivated successfully.
Jan 21 11:05:19 np0005590810 podman[87294]: 2026-01-21 16:05:19.68697577 +0000 UTC m=+1.033195508 container died 67948545aeb0671c6c21be8cef8af55b43acaf873ea324e393fcb06e958e1e17 (image=quay.io/ceph/ceph:v19, name=epic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:05:19 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d9b7097cf27dba9fafe37bb7a656718281802d7952ad5acb3234dc76bc3d6125-merged.mount: Deactivated successfully.
Jan 21 11:05:19 np0005590810 podman[87294]: 2026-01-21 16:05:19.721582426 +0000 UTC m=+1.067802154 container remove 67948545aeb0671c6c21be8cef8af55b43acaf873ea324e393fcb06e958e1e17 (image=quay.io/ceph/ceph:v19, name=epic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:05:19 np0005590810 systemd[1]: libpod-conmon-67948545aeb0671c6c21be8cef8af55b43acaf873ea324e393fcb06e958e1e17.scope: Deactivated successfully.
Jan 21 11:05:20 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:20 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:05:20 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/472381783' entity='client.admin' 
Jan 21 11:05:20 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:05:20 np0005590810 python3[87370]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard//server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:20 np0005590810 podman[87371]: 2026-01-21 16:05:20.643946035 +0000 UTC m=+0.044900327 container create d85c6cd3da490a40aa83fec0706a52f5a916d7046dd8c2151f098e962aebaa42 (image=quay.io/ceph/ceph:v19, name=musing_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 11:05:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 21 11:05:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:05:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:05:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Jan 21 11:05:20 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Jan 21 11:05:20 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 49246276-48ce-4bab-be52-02f4723b6d35 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 21 11:05:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Jan 21 11:05:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:05:20 np0005590810 systemd[1]: Started libpod-conmon-d85c6cd3da490a40aa83fec0706a52f5a916d7046dd8c2151f098e962aebaa42.scope.
Jan 21 11:05:20 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:20 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a34e02fe7e3fa9ddd7035df40c0e9760e5878d3da812a9cbd70ba3a4e83085a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:20 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a34e02fe7e3fa9ddd7035df40c0e9760e5878d3da812a9cbd70ba3a4e83085a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:20 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a34e02fe7e3fa9ddd7035df40c0e9760e5878d3da812a9cbd70ba3a4e83085a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:20 np0005590810 podman[87371]: 2026-01-21 16:05:20.624104351 +0000 UTC m=+0.025058663 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:20 np0005590810 podman[87371]: 2026-01-21 16:05:20.720451136 +0000 UTC m=+0.121405448 container init d85c6cd3da490a40aa83fec0706a52f5a916d7046dd8c2151f098e962aebaa42 (image=quay.io/ceph/ceph:v19, name=musing_mayer, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 21 11:05:20 np0005590810 podman[87371]: 2026-01-21 16:05:20.726470761 +0000 UTC m=+0.127425043 container start d85c6cd3da490a40aa83fec0706a52f5a916d7046dd8c2151f098e962aebaa42 (image=quay.io/ceph/ceph:v19, name=musing_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 11:05:20 np0005590810 podman[87371]: 2026-01-21 16:05:20.729175462 +0000 UTC m=+0.130129774 container attach d85c6cd3da490a40aa83fec0706a52f5a916d7046dd8c2151f098e962aebaa42 (image=quay.io/ceph/ceph:v19, name=musing_mayer, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard//server_addr}] v 0)
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2590062758' entity='client.admin' 
Jan 21 11:05:21 np0005590810 systemd[1]: libpod-d85c6cd3da490a40aa83fec0706a52f5a916d7046dd8c2151f098e962aebaa42.scope: Deactivated successfully.
Jan 21 11:05:21 np0005590810 podman[87371]: 2026-01-21 16:05:21.097539172 +0000 UTC m=+0.498493474 container died d85c6cd3da490a40aa83fec0706a52f5a916d7046dd8c2151f098e962aebaa42 (image=quay.io/ceph/ceph:v19, name=musing_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 11:05:21 np0005590810 systemd[1]: var-lib-containers-storage-overlay-3a34e02fe7e3fa9ddd7035df40c0e9760e5878d3da812a9cbd70ba3a4e83085a-merged.mount: Deactivated successfully.
Jan 21 11:05:21 np0005590810 podman[87371]: 2026-01-21 16:05:21.128690311 +0000 UTC m=+0.529644593 container remove d85c6cd3da490a40aa83fec0706a52f5a916d7046dd8c2151f098e962aebaa42 (image=quay.io/ceph/ceph:v19, name=musing_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:05:21 np0005590810 systemd[1]: libpod-conmon-d85c6cd3da490a40aa83fec0706a52f5a916d7046dd8c2151f098e962aebaa42.scope: Deactivated successfully.
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2590062758' entity='client.admin' 
Jan 21 11:05:21 np0005590810 ceph-mgr[74671]: [progress WARNING root] Starting Global Recovery Event,63 pgs not in active + clean state
Jan 21 11:05:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v85: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Jan 21 11:05:21 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 0353302f-3760-4f06-9ee6-73a0d75f40b1 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 21 11:05:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:05:22 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:22 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:22 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:05:22 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:05:22 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:05:22 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:05:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:05:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 21 11:05:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:05:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e32 e32: 2 total, 2 up, 2 in
Jan 21 11:05:22 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 8b051e7c-7211-44f2-b6c4-8a0dbf898cfe (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 83cc3dbc-bc37-4587-be9c-3fc920f00d18 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 83cc3dbc-bc37-4587-be9c-3fc920f00d18 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 6 seconds
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 491915ee-6c03-4e45-8ca0-de572b868fe9 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 491915ee-6c03-4e45-8ca0-de572b868fe9 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 5 seconds
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 54ffe3d2-727a-4ebc-ab53-9f461829242e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 54ffe3d2-727a-4ebc-ab53-9f461829242e (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 49246276-48ce-4bab-be52-02f4723b6d35 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 49246276-48ce-4bab-be52-02f4723b6d35 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 0353302f-3760-4f06-9ee6-73a0d75f40b1 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 0353302f-3760-4f06-9ee6-73a0d75f40b1 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 8b051e7c-7211-44f2-b6c4-8a0dbf898cfe (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 21 11:05:22 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 8b051e7c-7211-44f2-b6c4-8a0dbf898cfe (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 30 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=30 pruub=14.396924019s) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active pruub 55.407291412s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 31 pg[5.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31 pruub=8.412352562s) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active pruub 49.422737122s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 31 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=31 pruub=15.411546707s) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active pruub 56.421939850s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31 pruub=8.412352562s) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown pruub 49.422737122s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=30 pruub=14.396924019s) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown pruub 55.407291412s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=31 pruub=15.411546707s) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown pruub 56.421939850s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.13( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.14( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.15( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.16( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.17( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.18( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.19( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.1b( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.1c( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.1d( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.1e( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.3( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.5( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.6( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.1f( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.11( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.12( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.1( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.9( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.a( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.b( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.c( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.d( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.f( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.10( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.7( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.8( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[5.2( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.12( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.13( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.14( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.16( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.17( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.18( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.2( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.1( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.3( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.4( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.19( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.1b( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.1c( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.1e( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.d( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.f( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.10( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.1f( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.a( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.b( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.c( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.5( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.6( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.7( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[3.8( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.10( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.11( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.12( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.13( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.14( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.16( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.17( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.2( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.3( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.4( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.5( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.18( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.19( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.1a( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.1b( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.1c( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.1d( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.c( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.d( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.e( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.f( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.1e( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.a( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.b( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.6( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.7( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=16/17 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:23 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:05:23 np0005590810 python3[87448]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard//server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v88: 131 pgs: 96 peering, 35 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 11:05:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 11:05:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:23 np0005590810 podman[87449]: 2026-01-21 16:05:23.566793188 +0000 UTC m=+0.042487706 container create cf4141c0fbae0dc24356d5440bd5866230c0cab21dfd066439ac10ad966aafc0 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:05:23 np0005590810 systemd[75652]: Starting Mark boot as successful...
Jan 21 11:05:23 np0005590810 systemd[75652]: Finished Mark boot as successful.
Jan 21 11:05:23 np0005590810 systemd[1]: Started libpod-conmon-cf4141c0fbae0dc24356d5440bd5866230c0cab21dfd066439ac10ad966aafc0.scope.
Jan 21 11:05:23 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ea3f9725e2e5c26d0bd8e4452050a4d963e71bfc510dfbeda1727c5ba193ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ea3f9725e2e5c26d0bd8e4452050a4d963e71bfc510dfbeda1727c5ba193ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ea3f9725e2e5c26d0bd8e4452050a4d963e71bfc510dfbeda1727c5ba193ba/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:23 np0005590810 podman[87449]: 2026-01-21 16:05:23.620265865 +0000 UTC m=+0.095960433 container init cf4141c0fbae0dc24356d5440bd5866230c0cab21dfd066439ac10ad966aafc0 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 21 11:05:23 np0005590810 podman[87449]: 2026-01-21 16:05:23.626070283 +0000 UTC m=+0.101764801 container start cf4141c0fbae0dc24356d5440bd5866230c0cab21dfd066439ac10ad966aafc0 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:05:23 np0005590810 podman[87449]: 2026-01-21 16:05:23.629413246 +0000 UTC m=+0.105107814 container attach cf4141c0fbae0dc24356d5440bd5866230c0cab21dfd066439ac10ad966aafc0 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:05:23 np0005590810 podman[87449]: 2026-01-21 16:05:23.550121301 +0000 UTC m=+0.025815839 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 21 11:05:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:05:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:05:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e33 e33: 2 total, 2 up, 2 in
Jan 21 11:05:23 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33 pruub=8.578537941s) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active pruub 50.591869354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.19( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.19( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33 pruub=8.578537941s) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown pruub 50.591869354s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.1f( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.18( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.18( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.1a( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.1e( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.1b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.1c( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.1a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.1b( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.1b( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.1d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.e( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.1c( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.e( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.1d( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.8( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.2( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.4( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.4( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.3( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.3( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.5( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.2( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.5( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.6( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.f( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.7( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.1( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.5( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.3( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.0( empty local-lis/les=31/33 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.1( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.6( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.7( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.0( empty local-lis/les=31/33 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.7( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.0( empty local-lis/les=30/33 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.a( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.6( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.d( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.c( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.c( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.b( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.b( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.d( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.c( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.a( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.8( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.9( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.f( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.16( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.16( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.10( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.17( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.17( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.12( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.14( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.15( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.13( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.13( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.14( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.12( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.12( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.11( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.16( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.13( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.10( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.10( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.17( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.18( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[3.19( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.1e( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.11( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[5.1f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 33 pg[4.1e( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [0] r=0 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard//server_addr}] v 0)
Jan 21 11:05:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3653537297' entity='client.admin' 
Jan 21 11:05:23 np0005590810 systemd[1]: libpod-cf4141c0fbae0dc24356d5440bd5866230c0cab21dfd066439ac10ad966aafc0.scope: Deactivated successfully.
Jan 21 11:05:23 np0005590810 podman[87449]: 2026-01-21 16:05:23.988610354 +0000 UTC m=+0.464304882 container died cf4141c0fbae0dc24356d5440bd5866230c0cab21dfd066439ac10ad966aafc0 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Jan 21 11:05:24 np0005590810 systemd[1]: var-lib-containers-storage-overlay-14ea3f9725e2e5c26d0bd8e4452050a4d963e71bfc510dfbeda1727c5ba193ba-merged.mount: Deactivated successfully.
Jan 21 11:05:24 np0005590810 podman[87449]: 2026-01-21 16:05:24.028390146 +0000 UTC m=+0.504084664 container remove cf4141c0fbae0dc24356d5440bd5866230c0cab21dfd066439ac10ad966aafc0 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:05:24 np0005590810 systemd[1]: libpod-conmon-cf4141c0fbae0dc24356d5440bd5866230c0cab21dfd066439ac10ad966aafc0.scope: Deactivated successfully.
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 21 11:05:24 np0005590810 python3[87528]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3653537297' entity='client.admin' 
Jan 21 11:05:24 np0005590810 podman[87529]: 2026-01-21 16:05:24.365210685 +0000 UTC m=+0.026454201 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:24 np0005590810 podman[87529]: 2026-01-21 16:05:24.461422796 +0000 UTC m=+0.122666292 container create a6a3c101a687ca995f126598ee19eb10dc2c50233e0c6679ed67a14fe42dfe08 (image=quay.io/ceph/ceph:v19, name=goofy_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 21 11:05:24 np0005590810 systemd[1]: Started libpod-conmon-a6a3c101a687ca995f126598ee19eb10dc2c50233e0c6679ed67a14fe42dfe08.scope.
Jan 21 11:05:24 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:24 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa095f9369505e3725c8dd4c5866455a51de7c7fd997ce027651f15a28b4351/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:24 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa095f9369505e3725c8dd4c5866455a51de7c7fd997ce027651f15a28b4351/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:24 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa095f9369505e3725c8dd4c5866455a51de7c7fd997ce027651f15a28b4351/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:24 np0005590810 podman[87529]: 2026-01-21 16:05:24.591246727 +0000 UTC m=+0.252490243 container init a6a3c101a687ca995f126598ee19eb10dc2c50233e0c6679ed67a14fe42dfe08 (image=quay.io/ceph/ceph:v19, name=goofy_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 21 11:05:24 np0005590810 podman[87529]: 2026-01-21 16:05:24.597628674 +0000 UTC m=+0.258872170 container start a6a3c101a687ca995f126598ee19eb10dc2c50233e0c6679ed67a14fe42dfe08 (image=quay.io/ceph/ceph:v19, name=goofy_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 11:05:24 np0005590810 podman[87529]: 2026-01-21 16:05:24.601526567 +0000 UTC m=+0.262770063 container attach a6a3c101a687ca995f126598ee19eb10dc2c50233e0c6679ed67a14fe42dfe08 (image=quay.io/ceph/ceph:v19, name=goofy_ptolemy, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e34 e34: 2 total, 2 up, 2 in
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e34: 2 total, 2 up, 2 in
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1a( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1b( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.18( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.19( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1e( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1f( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.c( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.d( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.6( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.7( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.4( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.3( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.2( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.5( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.f( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.e( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.9( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.8( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.b( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.a( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.15( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.14( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.17( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.16( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.11( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.10( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.13( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.12( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1d( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1c( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.18( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.19( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.6( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.0( empty local-lis/les=33/34 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.7( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.2( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.4( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.5( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.9( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.8( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.3( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.15( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.17( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.16( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.14( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.11( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.10( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.13( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.12( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 34 pg[6.1c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [0] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:05:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:05:24 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 21 11:05:24 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 21 11:05:25 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 21 11:05:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Jan 21 11:05:25 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2879068237' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 21 11:05:25 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 21 11:05:25 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:05:25 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:05:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v91: 193 pgs: 62 unknown, 96 peering, 35 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:25 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:25 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:25 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:25 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:25 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 11:05:25 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:05:25 np0005590810 ceph-mon[74380]: Updating compute-2:/etc/ceph/ceph.conf
Jan 21 11:05:25 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2879068237' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 21 11:05:25 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2879068237' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 21 11:05:25 np0005590810 goofy_ptolemy[87544]: module 'dashboard' is already disabled
Jan 21 11:05:25 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.ygffhs(active, since 2m)
Jan 21 11:05:25 np0005590810 systemd[1]: libpod-a6a3c101a687ca995f126598ee19eb10dc2c50233e0c6679ed67a14fe42dfe08.scope: Deactivated successfully.
Jan 21 11:05:25 np0005590810 podman[87529]: 2026-01-21 16:05:25.877504045 +0000 UTC m=+1.538747551 container died a6a3c101a687ca995f126598ee19eb10dc2c50233e0c6679ed67a14fe42dfe08 (image=quay.io/ceph/ceph:v19, name=goofy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Jan 21 11:05:25 np0005590810 systemd[1]: var-lib-containers-storage-overlay-dfa095f9369505e3725c8dd4c5866455a51de7c7fd997ce027651f15a28b4351-merged.mount: Deactivated successfully.
Jan 21 11:05:25 np0005590810 podman[87529]: 2026-01-21 16:05:25.914263054 +0000 UTC m=+1.575506550 container remove a6a3c101a687ca995f126598ee19eb10dc2c50233e0c6679ed67a14fe42dfe08 (image=quay.io/ceph/ceph:v19, name=goofy_ptolemy, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:05:25 np0005590810 systemd[1]: libpod-conmon-a6a3c101a687ca995f126598ee19eb10dc2c50233e0c6679ed67a14fe42dfe08.scope: Deactivated successfully.
Jan 21 11:05:25 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:05:25 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:05:25 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 21 11:05:26 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 21 11:05:26 np0005590810 python3[87607]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:26 np0005590810 podman[87608]: 2026-01-21 16:05:26.280367338 +0000 UTC m=+0.048872913 container create 2ea2ed3995006daab9e3e1dc55844aa9ebdc9fbbedb4e50115dad22c65dacb43 (image=quay.io/ceph/ceph:v19, name=serene_hawking, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:26 np0005590810 systemd[1]: Started libpod-conmon-2ea2ed3995006daab9e3e1dc55844aa9ebdc9fbbedb4e50115dad22c65dacb43.scope.
Jan 21 11:05:26 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3b110bb7ce62193737fc240b61687c8be514e911d005c4ba92d260bd73a59b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3b110bb7ce62193737fc240b61687c8be514e911d005c4ba92d260bd73a59b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3b110bb7ce62193737fc240b61687c8be514e911d005c4ba92d260bd73a59b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:26 np0005590810 podman[87608]: 2026-01-21 16:05:26.256035541 +0000 UTC m=+0.024541166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:26 np0005590810 podman[87608]: 2026-01-21 16:05:26.359529678 +0000 UTC m=+0.128035253 container init 2ea2ed3995006daab9e3e1dc55844aa9ebdc9fbbedb4e50115dad22c65dacb43 (image=quay.io/ceph/ceph:v19, name=serene_hawking, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 11:05:26 np0005590810 podman[87608]: 2026-01-21 16:05:26.364646522 +0000 UTC m=+0.133152097 container start 2ea2ed3995006daab9e3e1dc55844aa9ebdc9fbbedb4e50115dad22c65dacb43 (image=quay.io/ceph/ceph:v19, name=serene_hawking, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 11:05:26 np0005590810 podman[87608]: 2026-01-21 16:05:26.367635944 +0000 UTC m=+0.136141539 container attach 2ea2ed3995006daab9e3e1dc55844aa9ebdc9fbbedb4e50115dad22c65dacb43 (image=quay.io/ceph/ceph:v19, name=serene_hawking, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:05:26 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 8 completed events
Jan 21 11:05:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:05:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:26 np0005590810 ceph-mon[74380]: Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:05:26 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2879068237' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 21 11:05:26 np0005590810 ceph-mon[74380]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:05:26 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:26 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:05:26 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:05:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Jan 21 11:05:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3239893090' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 21 11:05:26 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 21 11:05:26 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v92: 193 pgs: 62 unknown, 96 peering, 35 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 8d8c9313-63cc-4c65-a755-ae0b9ad2c135 (Updating mon deployment (+2 -> 3))
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3239893090' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: from='mgr.14120 192.168.122.100:0/3793240354' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3239893090' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr respawn  1: '-n'
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr respawn  2: 'mgr.compute-0.ygffhs'
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr respawn  3: '-f'
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr respawn  4: '--setuser'
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr respawn  5: 'ceph'
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr respawn  6: '--setgroup'
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr respawn  7: 'ceph'
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr respawn  8: '--default-log-to-file=false'
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr respawn  9: '--default-log-to-journald=true'
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 21 11:05:27 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.ygffhs(active, since 2m)
Jan 21 11:05:27 np0005590810 systemd[1]: libpod-2ea2ed3995006daab9e3e1dc55844aa9ebdc9fbbedb4e50115dad22c65dacb43.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 podman[87608]: 2026-01-21 16:05:27.633504879 +0000 UTC m=+1.402010464 container died 2ea2ed3995006daab9e3e1dc55844aa9ebdc9fbbedb4e50115dad22c65dacb43 (image=quay.io/ceph/ceph:v19, name=serene_hawking, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:05:27 np0005590810 systemd[1]: var-lib-containers-storage-overlay-9e3b110bb7ce62193737fc240b61687c8be514e911d005c4ba92d260bd73a59b-merged.mount: Deactivated successfully.
Jan 21 11:05:27 np0005590810 podman[87608]: 2026-01-21 16:05:27.673051082 +0000 UTC m=+1.441556657 container remove 2ea2ed3995006daab9e3e1dc55844aa9ebdc9fbbedb4e50115dad22c65dacb43 (image=quay.io/ceph/ceph:v19, name=serene_hawking, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:05:27 np0005590810 systemd[1]: libpod-conmon-2ea2ed3995006daab9e3e1dc55844aa9ebdc9fbbedb4e50115dad22c65dacb43.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Session 30 logged out. Waiting for processes to exit.
Jan 21 11:05:27 np0005590810 systemd[1]: session-33.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd[1]: session-33.scope: Consumed 16.912s CPU time.
Jan 21 11:05:27 np0005590810 systemd[1]: session-23.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd[1]: session-27.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd[1]: session-24.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd[1]: session-30.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Session 24 logged out. Waiting for processes to exit.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Session 33 logged out. Waiting for processes to exit.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Session 27 logged out. Waiting for processes to exit.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Session 23 logged out. Waiting for processes to exit.
Jan 21 11:05:27 np0005590810 systemd[1]: session-28.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd[1]: session-31.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd[1]: session-32.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd[1]: session-21.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd[1]: session-26.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Removed session 30.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Session 28 logged out. Waiting for processes to exit.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Session 32 logged out. Waiting for processes to exit.
Jan 21 11:05:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ignoring --setuser ceph since I am not root
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Session 21 logged out. Waiting for processes to exit.
Jan 21 11:05:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ignoring --setgroup ceph since I am not root
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Session 26 logged out. Waiting for processes to exit.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Session 31 logged out. Waiting for processes to exit.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Removed session 33.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Removed session 23.
Jan 21 11:05:27 np0005590810 systemd[1]: session-29.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd[1]: session-25.scope: Deactivated successfully.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Session 29 logged out. Waiting for processes to exit.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Session 25 logged out. Waiting for processes to exit.
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: pidfile_write: ignore empty --pid-file
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Removed session 27.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Removed session 24.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Removed session 28.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Removed session 31.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Removed session 32.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Removed session 21.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Removed session 26.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Removed session 29.
Jan 21 11:05:27 np0005590810 systemd-logind[795]: Removed session 25.
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'alerts'
Jan 21 11:05:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:27.852+0000 7f026f78d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'balancer'
Jan 21 11:05:27 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 21 11:05:27 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 21 11:05:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:27.942+0000 7f026f78d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 11:05:27 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'cephadm'
Jan 21 11:05:28 np0005590810 python3[87707]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:28 np0005590810 podman[87708]: 2026-01-21 16:05:28.096465553 +0000 UTC m=+0.037115722 container create 1ae6a2e2234256b9bc4a855c99d50e035baaa29af8cbfc92ea7b7f23cd2b1f8f (image=quay.io/ceph/ceph:v19, name=gallant_euclid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:05:28 np0005590810 systemd[1]: Started libpod-conmon-1ae6a2e2234256b9bc4a855c99d50e035baaa29af8cbfc92ea7b7f23cd2b1f8f.scope.
Jan 21 11:05:28 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa8f3c8d4d4949ab14ac9cbbb9e81c24f8e7f2dbf38976bf1ecbcdb76ad1a94/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa8f3c8d4d4949ab14ac9cbbb9e81c24f8e7f2dbf38976bf1ecbcdb76ad1a94/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa8f3c8d4d4949ab14ac9cbbb9e81c24f8e7f2dbf38976bf1ecbcdb76ad1a94/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:28 np0005590810 podman[87708]: 2026-01-21 16:05:28.171650609 +0000 UTC m=+0.112300798 container init 1ae6a2e2234256b9bc4a855c99d50e035baaa29af8cbfc92ea7b7f23cd2b1f8f (image=quay.io/ceph/ceph:v19, name=gallant_euclid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 11:05:28 np0005590810 podman[87708]: 2026-01-21 16:05:28.079445505 +0000 UTC m=+0.020095694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:28 np0005590810 podman[87708]: 2026-01-21 16:05:28.178215982 +0000 UTC m=+0.118866161 container start 1ae6a2e2234256b9bc4a855c99d50e035baaa29af8cbfc92ea7b7f23cd2b1f8f (image=quay.io/ceph/ceph:v19, name=gallant_euclid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 21 11:05:28 np0005590810 podman[87708]: 2026-01-21 16:05:28.181178773 +0000 UTC m=+0.121828952 container attach 1ae6a2e2234256b9bc4a855c99d50e035baaa29af8cbfc92ea7b7f23cd2b1f8f (image=quay.io/ceph/ceph:v19, name=gallant_euclid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:05:28 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 21 11:05:28 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 21 11:05:28 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3239893090' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 21 11:05:28 np0005590810 ceph-mon[74380]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 21 11:05:28 np0005590810 ceph-mon[74380]: Cluster is now healthy
Jan 21 11:05:28 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'crash'
Jan 21 11:05:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:28.777+0000 7f026f78d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 11:05:28 np0005590810 ceph-mgr[74671]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 11:05:28 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'dashboard'
Jan 21 11:05:28 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 21 11:05:28 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 21 11:05:29 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'devicehealth'
Jan 21 11:05:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:29.477+0000 7f026f78d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 11:05:29 np0005590810 ceph-mgr[74671]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 11:05:29 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 11:05:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 11:05:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 11:05:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  from numpy import show_config as show_numpy_config
Jan 21 11:05:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:29.655+0000 7f026f78d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 11:05:29 np0005590810 ceph-mgr[74671]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 11:05:29 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'influx'
Jan 21 11:05:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:29.735+0000 7f026f78d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 11:05:29 np0005590810 ceph-mgr[74671]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 11:05:29 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'insights'
Jan 21 11:05:29 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'iostat'
Jan 21 11:05:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:29.881+0000 7f026f78d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 11:05:29 np0005590810 ceph-mgr[74671]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 11:05:29 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'k8sevents'
Jan 21 11:05:29 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 21 11:05:29 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 21 11:05:30 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'localpool'
Jan 21 11:05:30 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 11:05:30 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'mirroring'
Jan 21 11:05:30 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'nfs'
Jan 21 11:05:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 21 11:05:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 21 11:05:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:30.954+0000 7f026f78d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 11:05:30 np0005590810 ceph-mgr[74671]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 11:05:30 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'orchestrator'
Jan 21 11:05:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 21 11:05:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 21 11:05:31 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 21 11:05:31 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 21 11:05:31 np0005590810 ceph-mon[74380]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 21 11:05:31 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 21 11:05:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:05:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:31.197+0000 7f026f78d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 11:05:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:31.273+0000 7f026f78d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'osd_support'
Jan 21 11:05:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:31.338+0000 7f026f78d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 11:05:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:31.418+0000 7f026f78d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'progress'
Jan 21 11:05:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:31.491+0000 7f026f78d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'prometheus'
Jan 21 11:05:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:31.872+0000 7f026f78d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rbd_support'
Jan 21 11:05:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:31.971+0000 7f026f78d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 11:05:31 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'restful'
Jan 21 11:05:32 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 21 11:05:32 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 21 11:05:32 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rgw'
Jan 21 11:05:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:32.427+0000 7f026f78d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 11:05:32 np0005590810 ceph-mgr[74671]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 11:05:32 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rook'
Jan 21 11:05:33 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 21 11:05:33 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 21 11:05:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:33.052+0000 7f026f78d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'selftest'
Jan 21 11:05:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:33.128+0000 7f026f78d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'snap_schedule'
Jan 21 11:05:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:33.212+0000 7f026f78d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'stats'
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'status'
Jan 21 11:05:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:33.371+0000 7f026f78d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'telegraf'
Jan 21 11:05:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:33.450+0000 7f026f78d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'telemetry'
Jan 21 11:05:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:33.648+0000 7f026f78d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 11:05:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:33.926+0000 7f026f78d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 11:05:33 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'volumes'
Jan 21 11:05:34 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 21 11:05:34 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 21 11:05:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:34.258+0000 7f026f78d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 11:05:34 np0005590810 ceph-mgr[74671]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 11:05:34 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'zabbix'
Jan 21 11:05:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:34.348+0000 7f026f78d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 11:05:34 np0005590810 ceph-mgr[74671]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 11:05:34 np0005590810 ceph-mgr[74671]: ms_deliver_dispatch: unhandled message 0x563f1f71cd00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 11:05:34 np0005590810 ceph-mgr[74671]: ms_deliver_dispatch: unhandled message 0x563f1f71cea0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 11:05:34 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 21 11:05:35 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 21 11:05:36 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 21 11:05:36 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : monmap epoch 2
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsid d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : last_changed 2026-01-21T16:05:31.015926+0000
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : created 2026-01-21T16:02:46.356140+0000
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap 
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e34: 2 total, 2 up, 2 in
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.ygffhs(active, since 2m)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ygffhs restarted
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ygffhs
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e35 e35: 2 total, 2 up, 2 in
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map Activating!
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map I am now activating
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e35: 2 total, 2 up, 2 in
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.ygffhs(active, starting, since 0.0202292s)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ygffhs", "id": "compute-0.ygffhs"} v 0)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ygffhs", "id": "compute-0.ygffhs"}]: dispatch
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e1 all = 1
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Manager daemon compute-0.ygffhs is now available
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: balancer
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [balancer INFO root] Starting
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:05:36
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: cephadm
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: crash
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: dashboard
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: devicehealth
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0 calling monitor election
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-2 calling monitor election
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: iostat
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] Starting
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: overall HEALTH_OK
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: Active manager daemon compute-0.ygffhs restarted
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: Activating manager daemon compute-0.ygffhs
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: Manager daemon compute-0.ygffhs is now available
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO sso] Loading SSO DB version=1
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: nfs
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: orchestrator
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: pg_autoscaler
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: progress
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [progress INFO root] Loading...
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f01f2b35be0>, <progress.module.GhostEvent object at 0x7f01f2b35e50>, <progress.module.GhostEvent object at 0x7f01f2b35e80>, <progress.module.GhostEvent object at 0x7f01f2b35eb0>, <progress.module.GhostEvent object at 0x7f01f2b35ee0>, <progress.module.GhostEvent object at 0x7f01f2b35f10>, <progress.module.GhostEvent object at 0x7f01f2b35f40>, <progress.module.GhostEvent object at 0x7f01f2b35f70>] historic events
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [progress INFO root] Loaded OSDMap, ready.
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] recovery thread starting
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] starting setup
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: rbd_support
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"} v 0)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"}]: dispatch
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: restful
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: status
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [restful INFO root] server_addr: :: server_port: 8003
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: telemetry
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [restful WARNING root] server not running: no certificate configured
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] PerfHandler: starting
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TaskHandler: starting
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"} v 0)
Jan 21 11:05:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"}]: dispatch
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: volumes
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] setup complete
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 21 11:05:36 np0005590810 systemd-logind[795]: New session 34 of user ceph-admin.
Jan 21 11:05:36 np0005590810 systemd[1]: Started Session 34 of User ceph-admin.
Jan 21 11:05:36 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.module] Engine started.
Jan 21 11:05:36 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 21 11:05:37 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.ygffhs(active, since 1.03606s)
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14245 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v3: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:37.108+0000 7f020bc0f640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: mgr.server handle_report got status from non-daemon mon.compute-2
Jan 21 11:05:37 np0005590810 gallant_euclid[87724]: Option GRAFANA_API_USERNAME updated
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"}]: dispatch
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"}]: dispatch
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:37 np0005590810 systemd[1]: libpod-1ae6a2e2234256b9bc4a855c99d50e035baaa29af8cbfc92ea7b7f23cd2b1f8f.scope: Deactivated successfully.
Jan 21 11:05:37 np0005590810 podman[87708]: 2026-01-21 16:05:37.13096203 +0000 UTC m=+9.071612199 container died 1ae6a2e2234256b9bc4a855c99d50e035baaa29af8cbfc92ea7b7f23cd2b1f8f (image=quay.io/ceph/ceph:v19, name=gallant_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 21 11:05:37 np0005590810 systemd[1]: var-lib-containers-storage-overlay-9fa8f3c8d4d4949ab14ac9cbbb9e81c24f8e7f2dbf38976bf1ecbcdb76ad1a94-merged.mount: Deactivated successfully.
Jan 21 11:05:37 np0005590810 podman[87708]: 2026-01-21 16:05:37.174273073 +0000 UTC m=+9.114923242 container remove 1ae6a2e2234256b9bc4a855c99d50e035baaa29af8cbfc92ea7b7f23cd2b1f8f (image=quay.io/ceph/ceph:v19, name=gallant_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 21 11:05:37 np0005590810 podman[88011]: 2026-01-21 16:05:37.189985936 +0000 UTC m=+0.061205471 container exec 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:05:37 np0005590810 systemd[1]: libpod-conmon-1ae6a2e2234256b9bc4a855c99d50e035baaa29af8cbfc92ea7b7f23cd2b1f8f.scope: Deactivated successfully.
Jan 21 11:05:37 np0005590810 podman[88011]: 2026-01-21 16:05:37.276849119 +0000 UTC m=+0.148068644 container exec_died 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:37 np0005590810 python3[88085]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Jan 21 11:05:37 np0005590810 podman[88127]: 2026-01-21 16:05:37.524560538 +0000 UTC m=+0.049087799 container create 8fcbf1f239d2fb13b5a13fcbf31a152a258df89b0bfc877417a2e60d460a41c8 (image=quay.io/ceph/ceph:v19, name=determined_wiles, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:37 np0005590810 systemd[1]: Started libpod-conmon-8fcbf1f239d2fb13b5a13fcbf31a152a258df89b0bfc877417a2e60d460a41c8.scope.
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:05:37] ENGINE Bus STARTING
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:05:37] ENGINE Bus STARTING
Jan 21 11:05:37 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72cd38c005149aa5f9d1acaf404a4573156cdac9331a954a2e1f941ed98f2619/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72cd38c005149aa5f9d1acaf404a4573156cdac9331a954a2e1f941ed98f2619/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72cd38c005149aa5f9d1acaf404a4573156cdac9331a954a2e1f941ed98f2619/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:37 np0005590810 podman[88127]: 2026-01-21 16:05:37.502920412 +0000 UTC m=+0.027447693 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:37 np0005590810 podman[88127]: 2026-01-21 16:05:37.600893212 +0000 UTC m=+0.125420493 container init 8fcbf1f239d2fb13b5a13fcbf31a152a258df89b0bfc877417a2e60d460a41c8 (image=quay.io/ceph/ceph:v19, name=determined_wiles, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 21 11:05:37 np0005590810 podman[88127]: 2026-01-21 16:05:37.607644502 +0000 UTC m=+0.132171763 container start 8fcbf1f239d2fb13b5a13fcbf31a152a258df89b0bfc877417a2e60d460a41c8 (image=quay.io/ceph/ceph:v19, name=determined_wiles, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:05:37 np0005590810 podman[88127]: 2026-01-21 16:05:37.611385619 +0000 UTC m=+0.135912900 container attach 8fcbf1f239d2fb13b5a13fcbf31a152a258df89b0bfc877417a2e60d460a41c8 (image=quay.io/ceph/ceph:v19, name=determined_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:05:37] ENGINE Serving on http://192.168.122.100:8765
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:05:37] ENGINE Serving on http://192.168.122.100:8765
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:05:37] ENGINE Serving on https://192.168.122.100:7150
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:05:37] ENGINE Serving on https://192.168.122.100:7150
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:05:37] ENGINE Bus STARTED
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:05:37] ENGINE Bus STARTED
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:05:37] ENGINE Client ('192.168.122.100', 51714) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:05:37] ENGINE Client ('192.168.122.100', 51714) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 11:05:37 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 21 11:05:37 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Jan 21 11:05:37 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 21 11:05:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:37 np0005590810 determined_wiles[88146]: Option GRAFANA_API_PASSWORD updated
Jan 21 11:05:38 np0005590810 systemd[1]: libpod-8fcbf1f239d2fb13b5a13fcbf31a152a258df89b0bfc877417a2e60d460a41c8.scope: Deactivated successfully.
Jan 21 11:05:38 np0005590810 podman[88127]: 2026-01-21 16:05:38.010852546 +0000 UTC m=+0.535379827 container died 8fcbf1f239d2fb13b5a13fcbf31a152a258df89b0bfc877417a2e60d460a41c8 (image=quay.io/ceph/ceph:v19, name=determined_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:05:38 np0005590810 systemd[1]: var-lib-containers-storage-overlay-72cd38c005149aa5f9d1acaf404a4573156cdac9331a954a2e1f941ed98f2619-merged.mount: Deactivated successfully.
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v4: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:38 np0005590810 podman[88127]: 2026-01-21 16:05:38.097387538 +0000 UTC m=+0.621914799 container remove 8fcbf1f239d2fb13b5a13fcbf31a152a258df89b0bfc877417a2e60d460a41c8 (image=quay.io/ceph/ceph:v19, name=determined_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:05:38 np0005590810 systemd[1]: libpod-conmon-8fcbf1f239d2fb13b5a13fcbf31a152a258df89b0bfc877417a2e60d460a41c8.scope: Deactivated successfully.
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] Check health
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:05:37] ENGINE Bus STARTING
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:05:37] ENGINE Serving on http://192.168.122.100:8765
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 11:05:38 np0005590810 python3[88379]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:38 np0005590810 podman[88380]: 2026-01-21 16:05:38.502918971 +0000 UTC m=+0.045069143 container create 9b74af562db0f137f14acbda1c0181d8f73667012c50eb4bc8feee1a4af56b0d (image=quay.io/ceph/ceph:v19, name=agitated_chaum, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:38 np0005590810 systemd[1]: Started libpod-conmon-9b74af562db0f137f14acbda1c0181d8f73667012c50eb4bc8feee1a4af56b0d.scope.
Jan 21 11:05:38 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:38 np0005590810 podman[88380]: 2026-01-21 16:05:38.481047707 +0000 UTC m=+0.023197899 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e1fff5e85f1a9a96d94a56cdca339f89558104cdb81140f6346a066914d0ab/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e1fff5e85f1a9a96d94a56cdca339f89558104cdb81140f6346a066914d0ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e1fff5e85f1a9a96d94a56cdca339f89558104cdb81140f6346a066914d0ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:38 np0005590810 podman[88380]: 2026-01-21 16:05:38.599616247 +0000 UTC m=+0.141766409 container init 9b74af562db0f137f14acbda1c0181d8f73667012c50eb4bc8feee1a4af56b0d (image=quay.io/ceph/ceph:v19, name=agitated_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:38 np0005590810 podman[88380]: 2026-01-21 16:05:38.606437829 +0000 UTC m=+0.148588001 container start 9b74af562db0f137f14acbda1c0181d8f73667012c50eb4bc8feee1a4af56b0d (image=quay.io/ceph/ceph:v19, name=agitated_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 21 11:05:38 np0005590810 podman[88380]: 2026-01-21 16:05:38.610909141 +0000 UTC m=+0.153059333 container attach 9b74af562db0f137f14acbda1c0181d8f73667012c50eb4bc8feee1a4af56b0d (image=quay.io/ceph/ceph:v19, name=agitated_chaum, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 21 11:05:38 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14268 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Jan 21 11:05:38 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:38 np0005590810 agitated_chaum[88412]: Option ALERTMANAGER_API_HOST updated
Jan 21 11:05:38 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Jan 21 11:05:38 np0005590810 systemd[1]: libpod-9b74af562db0f137f14acbda1c0181d8f73667012c50eb4bc8feee1a4af56b0d.scope: Deactivated successfully.
Jan 21 11:05:38 np0005590810 podman[88380]: 2026-01-21 16:05:38.98863141 +0000 UTC m=+0.530781582 container died 9b74af562db0f137f14acbda1c0181d8f73667012c50eb4bc8feee1a4af56b0d (image=quay.io/ceph/ceph:v19, name=agitated_chaum, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 21 11:05:39 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c7e1fff5e85f1a9a96d94a56cdca339f89558104cdb81140f6346a066914d0ab-merged.mount: Deactivated successfully.
Jan 21 11:05:39 np0005590810 podman[88380]: 2026-01-21 16:05:39.022765979 +0000 UTC m=+0.564916141 container remove 9b74af562db0f137f14acbda1c0181d8f73667012c50eb4bc8feee1a4af56b0d (image=quay.io/ceph/ceph:v19, name=agitated_chaum, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Jan 21 11:05:39 np0005590810 systemd[1]: libpod-conmon-9b74af562db0f137f14acbda1c0181d8f73667012c50eb4bc8feee1a4af56b0d.scope: Deactivated successfully.
Jan 21 11:05:39 np0005590810 python3[88597]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:39 np0005590810 podman[88668]: 2026-01-21 16:05:39.356839644 +0000 UTC m=+0.038587662 container create 314f211d79eb70ac199cd645ce8f1030e8e56d750a37a0b3ee66f7da73349cc0 (image=quay.io/ceph/ceph:v19, name=hardcore_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Jan 21 11:05:39 np0005590810 systemd[1]: Started libpod-conmon-314f211d79eb70ac199cd645ce8f1030e8e56d750a37a0b3ee66f7da73349cc0.scope.
Jan 21 11:05:39 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:05:39 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:05:39 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72147536eb15c745e21a69bb2a440df4e6fc4ce51f3faec008cc8a91f4fbb511/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72147536eb15c745e21a69bb2a440df4e6fc4ce51f3faec008cc8a91f4fbb511/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72147536eb15c745e21a69bb2a440df4e6fc4ce51f3faec008cc8a91f4fbb511/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:39 np0005590810 podman[88668]: 2026-01-21 16:05:39.424770033 +0000 UTC m=+0.106518071 container init 314f211d79eb70ac199cd645ce8f1030e8e56d750a37a0b3ee66f7da73349cc0 (image=quay.io/ceph/ceph:v19, name=hardcore_meitner, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:05:39 np0005590810 podman[88668]: 2026-01-21 16:05:39.430153636 +0000 UTC m=+0.111901644 container start 314f211d79eb70ac199cd645ce8f1030e8e56d750a37a0b3ee66f7da73349cc0 (image=quay.io/ceph/ceph:v19, name=hardcore_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:05:39 np0005590810 podman[88668]: 2026-01-21 16:05:39.433813821 +0000 UTC m=+0.115561839 container attach 314f211d79eb70ac199cd645ce8f1030e8e56d750a37a0b3ee66f7da73349cc0 (image=quay.io/ceph/ceph:v19, name=hardcore_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 21 11:05:39 np0005590810 podman[88668]: 2026-01-21 16:05:39.340064004 +0000 UTC m=+0.021812042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:39 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:05:39 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:05:39 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:05:39 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e36 e36: 2 total, 2 up, 2 in
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e36: 2 total, 2 up, 2 in
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.1d( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.19( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.13( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.15( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.10( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.13( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.10( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.14( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.e( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.b( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.a( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.c( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.9( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.d( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.8( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.f( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.a( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.e( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.1( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.4( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.3( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.6( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.2( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.4( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.6( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.9( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.1b( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.1e( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.18( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.1f( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[7.1b( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[2.1e( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.903543472s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.020874023s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.903512001s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.020874023s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.1a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.043819427s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.161537170s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.1a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.043792725s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.161537170s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.18( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.902141571s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.020851135s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.18( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.902109146s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.020851135s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.1d( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.901715279s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.020919800s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.1d( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.901690483s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.020919800s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.901352882s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.020889282s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.901327133s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.020889282s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.1b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.901205063s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021049500s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.1b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.901184082s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021049500s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.1c( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.900624275s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.020927429s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.1c( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.900602341s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.020927429s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900699615s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021308899s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900669098s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021308899s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.19( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.040797234s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.161560059s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.19( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.040775299s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.161560059s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.1e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.040925980s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.161842346s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.1e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.040907860s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.161842346s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.1a( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.900453568s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021408081s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.1a( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.900436401s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021408081s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.1c( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900250435s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021377563s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.1c( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900229454s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021377563s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900225639s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021423340s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.9( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.900195122s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021446228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.9( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.900184631s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021446228s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.1a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899799347s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021163940s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.1a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899758339s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021163940s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900010109s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021492004s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899962425s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021492004s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899920464s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021423340s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.e( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899700165s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021522522s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.e( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899673462s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021522522s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.039878845s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.161773682s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.039806366s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.161773682s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.3( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.899691582s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021705627s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.3( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.899674416s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021705627s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.2( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899314880s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021537781s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899537086s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021781921s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.7( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.039919853s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.162200928s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899517059s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021781921s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.7( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.039902687s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.162200928s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.7( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899681091s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.022010803s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.7( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899663925s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.022010803s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.5( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.899628639s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.022071838s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.5( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.899610519s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.022071838s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.3( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.039960861s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.162513733s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.3( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.039925575s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.162513733s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.1( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899499893s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.022155762s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.1( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899480820s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.022155762s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.039608955s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.162338257s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.2( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899299622s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021537781s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.039595604s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.162338257s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.5( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.039458275s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.162368774s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.5( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.039443970s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.162368774s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.4( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898996353s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.021881104s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898939133s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.022018433s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898920059s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.022018433s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.4( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898790359s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.021881104s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.a( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898907661s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.022369385s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.a( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898883820s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.022369385s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898929596s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.022476196s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.038958549s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.162513733s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.038926125s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.162513733s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898900032s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.022476196s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.c( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898846626s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.022567749s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899028778s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.022766113s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.d( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898849487s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.022583008s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899009705s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.022766113s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.c( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898823738s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.022567749s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.d( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898833275s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.022583008s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899039268s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.022468567s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.038638115s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.162483215s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899180412s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023071289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.038616180s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.162483215s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.e( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.899018288s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.022918701s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.899167061s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023071289s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.e( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.899001122s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.022918701s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898989677s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023010254s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898973465s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023010254s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.f( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898993492s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023056030s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.9( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898870468s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023002625s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.f( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898977280s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023056030s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.9( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898853302s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023002625s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.038311005s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.162506104s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.038266182s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.162506104s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.10( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898797989s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023124695s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.10( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898775101s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023124695s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.16( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898721695s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023101807s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.15( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.038077354s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.162521362s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.16( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898701668s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023101807s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.15( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.038044930s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.162521362s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.11( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898562431s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023086548s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.11( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898547173s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023086548s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898538589s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023231506s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898514748s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023231506s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.17( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.037819862s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.162582397s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.13( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898502350s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023307800s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.17( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.037792206s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.162582397s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.13( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898483276s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023307800s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.15( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898301125s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023239136s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.15( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898281097s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023239136s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.14( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898330688s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023315430s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898294449s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023307800s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.14( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898295403s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023315430s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898274422s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023307800s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.15( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898316383s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.023498535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.15( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.898293495s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.023498535s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.16( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.900463104s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 74.025718689s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[3.16( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=36 pruub=15.900449753s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.025718689s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.10( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900440216s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.025787354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.10( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900421143s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.025787354s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.11( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900501251s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.025924683s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.11( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900478363s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.025924683s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.12( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.037343979s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.162818909s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.12( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.037329674s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.162818909s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.1f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900352478s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.025985718s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.1c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.037205696s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active pruub 67.162857056s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900249481s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 active pruub 74.025909424s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[5.1f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900340080s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.025985718s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[6.1c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=36 pruub=9.037188530s) [1] r=-1 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.162857056s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.900225639s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.025909424s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=36 pruub=15.898606300s) [1] r=-1 lpr=36 pi=[31,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.022468567s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:05:39 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14272 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Jan 21 11:05:39 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:05:39 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:05:39 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:05:39 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:05:37] ENGINE Serving on https://192.168.122.100:7150
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:05:37] ENGINE Bus STARTED
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:05:37] ENGINE Client ('192.168.122.100', 51714) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:05:39 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:40 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.ygffhs(active, since 3s)
Jan 21 11:05:40 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Jan 21 11:05:40 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Jan 21 11:05:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v6: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:40 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:05:40 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:05:40 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:05:40 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:05:40 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:05:40 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:05:40 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:05:40 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:05:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 21 11:05:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:40 np0005590810 hardcore_meitner[88711]: Option PROMETHEUS_API_HOST updated
Jan 21 11:05:40 np0005590810 systemd[1]: libpod-314f211d79eb70ac199cd645ce8f1030e8e56d750a37a0b3ee66f7da73349cc0.scope: Deactivated successfully.
Jan 21 11:05:40 np0005590810 podman[88668]: 2026-01-21 16:05:40.93521967 +0000 UTC m=+1.616967688 container died 314f211d79eb70ac199cd645ce8f1030e8e56d750a37a0b3ee66f7da73349cc0 (image=quay.io/ceph/ceph:v19, name=hardcore_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 21 11:05:40 np0005590810 systemd[1]: var-lib-containers-storage-overlay-72147536eb15c745e21a69bb2a440df4e6fc4ce51f3faec008cc8a91f4fbb511-merged.mount: Deactivated successfully.
Jan 21 11:05:40 np0005590810 podman[88668]: 2026-01-21 16:05:40.977373523 +0000 UTC m=+1.659121541 container remove 314f211d79eb70ac199cd645ce8f1030e8e56d750a37a0b3ee66f7da73349cc0 (image=quay.io/ceph/ceph:v19, name=hardcore_meitner, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 21 11:05:40 np0005590810 systemd[1]: libpod-conmon-314f211d79eb70ac199cd645ce8f1030e8e56d750a37a0b3ee66f7da73349cc0.scope: Deactivated successfully.
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e37 e37: 2 total, 2 up, 2 in
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e37: 2 total, 2 up, 2 in
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.1b( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.1f( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.18( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.1e( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.1e( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.1b( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.4( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.2( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.6( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.3( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.4( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.1( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.e( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.9( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.d( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.a( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.8( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.9( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.c( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.a( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.e( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.14( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.f( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.10( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.15( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.b( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.10( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.13( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.13( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[2.19( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.1d( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 37 pg[7.6( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[33,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: Updating compute-0:/etc/ceph/ceph.conf
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: Updating compute-1:/etc/ceph/ceph.conf
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: Updating compute-2:/etc/ceph/ceph.conf
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:05:41 np0005590810 python3[89442]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:41 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 40a19c9d-debe-4905-9e9b-dfee3d67f9ae (Updating crash deployment (+1 -> 3))
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:05:41 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 21 11:05:41 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 21 11:05:41 np0005590810 podman[89443]: 2026-01-21 16:05:41.483817586 +0000 UTC m=+0.038849051 container create 5cd1ca525880c81fde7cff3399132d62704afdc2ef38d17ce36e67a6af71c526 (image=quay.io/ceph/ceph:v19, name=pensive_pare, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 21 11:05:41 np0005590810 systemd[1]: Started libpod-conmon-5cd1ca525880c81fde7cff3399132d62704afdc2ef38d17ce36e67a6af71c526.scope.
Jan 21 11:05:41 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a832840b9961c38fcb9e89d028592c2f11fc40513ee672e08b91fbc5341321/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a832840b9961c38fcb9e89d028592c2f11fc40513ee672e08b91fbc5341321/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a832840b9961c38fcb9e89d028592c2f11fc40513ee672e08b91fbc5341321/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:41 np0005590810 podman[89443]: 2026-01-21 16:05:41.467375567 +0000 UTC m=+0.022407062 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:41 np0005590810 podman[89443]: 2026-01-21 16:05:41.571408103 +0000 UTC m=+0.126439588 container init 5cd1ca525880c81fde7cff3399132d62704afdc2ef38d17ce36e67a6af71c526 (image=quay.io/ceph/ceph:v19, name=pensive_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:05:41 np0005590810 podman[89443]: 2026-01-21 16:05:41.577827502 +0000 UTC m=+0.132858967 container start 5cd1ca525880c81fde7cff3399132d62704afdc2ef38d17ce36e67a6af71c526 (image=quay.io/ceph/ceph:v19, name=pensive_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:05:41 np0005590810 podman[89443]: 2026-01-21 16:05:41.584407585 +0000 UTC m=+0.139439090 container attach 5cd1ca525880c81fde7cff3399132d62704afdc2ef38d17ce36e67a6af71c526 (image=quay.io/ceph/ceph:v19, name=pensive_pare, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 21 11:05:41 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14276 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Jan 21 11:05:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:41 np0005590810 pensive_pare[89459]: Option GRAFANA_API_URL updated
Jan 21 11:05:41 np0005590810 systemd[1]: libpod-5cd1ca525880c81fde7cff3399132d62704afdc2ef38d17ce36e67a6af71c526.scope: Deactivated successfully.
Jan 21 11:05:41 np0005590810 podman[89443]: 2026-01-21 16:05:41.987691562 +0000 UTC m=+0.542723027 container died 5cd1ca525880c81fde7cff3399132d62704afdc2ef38d17ce36e67a6af71c526 (image=quay.io/ceph/ceph:v19, name=pensive_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:05:42 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d8a832840b9961c38fcb9e89d028592c2f11fc40513ee672e08b91fbc5341321-merged.mount: Deactivated successfully.
Jan 21 11:05:42 np0005590810 podman[89443]: 2026-01-21 16:05:42.021946186 +0000 UTC m=+0.576977651 container remove 5cd1ca525880c81fde7cff3399132d62704afdc2ef38d17ce36e67a6af71c526 (image=quay.io/ceph/ceph:v19, name=pensive_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:42 np0005590810 systemd[1]: libpod-conmon-5cd1ca525880c81fde7cff3399132d62704afdc2ef38d17ce36e67a6af71c526.scope: Deactivated successfully.
Jan 21 11:05:42 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Jan 21 11:05:42 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Jan 21 11:05:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v8: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: Deploying daemon crash.compute-2 on compute-2
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: from='mgr.14243 192.168.122.100:0/399389207' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:42 np0005590810 python3[89521]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:42 np0005590810 podman[89522]: 2026-01-21 16:05:42.339418917 +0000 UTC m=+0.037431644 container create 8e60e60d91310253d6a23b6d1a8c74a0582296e54fc82f24c04a506b069454b8 (image=quay.io/ceph/ceph:v19, name=inspiring_clarke, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:42 np0005590810 systemd[1]: Started libpod-conmon-8e60e60d91310253d6a23b6d1a8c74a0582296e54fc82f24c04a506b069454b8.scope.
Jan 21 11:05:42 np0005590810 podman[89522]: 2026-01-21 16:05:42.324360355 +0000 UTC m=+0.022373102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:42 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:42 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2c7a8da7dd53ca57377d81dd5bc3d8cf8b9d7469a87e6ea9f66ee0c725968d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:42 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2c7a8da7dd53ca57377d81dd5bc3d8cf8b9d7469a87e6ea9f66ee0c725968d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:42 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2c7a8da7dd53ca57377d81dd5bc3d8cf8b9d7469a87e6ea9f66ee0c725968d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:42 np0005590810 podman[89522]: 2026-01-21 16:05:42.444801348 +0000 UTC m=+0.142814095 container init 8e60e60d91310253d6a23b6d1a8c74a0582296e54fc82f24c04a506b069454b8 (image=quay.io/ceph/ceph:v19, name=inspiring_clarke, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 11:05:42 np0005590810 podman[89522]: 2026-01-21 16:05:42.451560738 +0000 UTC m=+0.149573455 container start 8e60e60d91310253d6a23b6d1a8c74a0582296e54fc82f24c04a506b069454b8 (image=quay.io/ceph/ceph:v19, name=inspiring_clarke, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:05:42 np0005590810 podman[89522]: 2026-01-21 16:05:42.454998485 +0000 UTC m=+0.153011602 container attach 8e60e60d91310253d6a23b6d1a8c74a0582296e54fc82f24c04a506b069454b8 (image=quay.io/ceph/ceph:v19, name=inspiring_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Jan 21 11:05:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1006953982' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 21 11:05:43 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 21 11:05:43 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 21 11:05:43 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/1006953982' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 21 11:05:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1006953982' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  1: '-n'
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  2: 'mgr.compute-0.ygffhs'
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  3: '-f'
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  4: '--setuser'
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  5: 'ceph'
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  6: '--setgroup'
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  7: 'ceph'
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  8: '--default-log-to-file=false'
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  9: '--default-log-to-journald=true'
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr respawn  exe_path /proc/self/exe
Jan 21 11:05:43 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.ygffhs(active, since 7s)
Jan 21 11:05:43 np0005590810 systemd[1]: libpod-8e60e60d91310253d6a23b6d1a8c74a0582296e54fc82f24c04a506b069454b8.scope: Deactivated successfully.
Jan 21 11:05:43 np0005590810 podman[89522]: 2026-01-21 16:05:43.385682047 +0000 UTC m=+1.083694784 container died 8e60e60d91310253d6a23b6d1a8c74a0582296e54fc82f24c04a506b069454b8 (image=quay.io/ceph/ceph:v19, name=inspiring_clarke, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:43 np0005590810 systemd[1]: var-lib-containers-storage-overlay-9b2c7a8da7dd53ca57377d81dd5bc3d8cf8b9d7469a87e6ea9f66ee0c725968d-merged.mount: Deactivated successfully.
Jan 21 11:05:43 np0005590810 podman[89522]: 2026-01-21 16:05:43.423843924 +0000 UTC m=+1.121856651 container remove 8e60e60d91310253d6a23b6d1a8c74a0582296e54fc82f24c04a506b069454b8 (image=quay.io/ceph/ceph:v19, name=inspiring_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 21 11:05:43 np0005590810 systemd[1]: libpod-conmon-8e60e60d91310253d6a23b6d1a8c74a0582296e54fc82f24c04a506b069454b8.scope: Deactivated successfully.
Jan 21 11:05:43 np0005590810 systemd[1]: session-34.scope: Deactivated successfully.
Jan 21 11:05:43 np0005590810 systemd[1]: session-34.scope: Consumed 4.248s CPU time.
Jan 21 11:05:43 np0005590810 systemd-logind[795]: Session 34 logged out. Waiting for processes to exit.
Jan 21 11:05:43 np0005590810 systemd-logind[795]: Removed session 34.
Jan 21 11:05:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ignoring --setuser ceph since I am not root
Jan 21 11:05:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ignoring --setgroup ceph since I am not root
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: pidfile_write: ignore empty --pid-file
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'alerts'
Jan 21 11:05:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:43.616+0000 7f17fae6e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'balancer'
Jan 21 11:05:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:43.709+0000 7f17fae6e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 11:05:43 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'cephadm'
Jan 21 11:05:43 np0005590810 python3[89618]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:43 np0005590810 podman[89619]: 2026-01-21 16:05:43.799521303 +0000 UTC m=+0.049563296 container create 6bec57fe5659fa330339cc95a9587565ca96ff06a34473e78624be66a9cf06c6 (image=quay.io/ceph/ceph:v19, name=frosty_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 21 11:05:43 np0005590810 systemd[1]: Started libpod-conmon-6bec57fe5659fa330339cc95a9587565ca96ff06a34473e78624be66a9cf06c6.scope.
Jan 21 11:05:43 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:43 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a554505623a2366b2c938e7b839e50a4b71d4ba4c206f5847ddfc7c72a21a44/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:43 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a554505623a2366b2c938e7b839e50a4b71d4ba4c206f5847ddfc7c72a21a44/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:43 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a554505623a2366b2c938e7b839e50a4b71d4ba4c206f5847ddfc7c72a21a44/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:43 np0005590810 podman[89619]: 2026-01-21 16:05:43.779879515 +0000 UTC m=+0.029921528 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:43 np0005590810 podman[89619]: 2026-01-21 16:05:43.88389634 +0000 UTC m=+0.133938343 container init 6bec57fe5659fa330339cc95a9587565ca96ff06a34473e78624be66a9cf06c6 (image=quay.io/ceph/ceph:v19, name=frosty_maxwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 21 11:05:43 np0005590810 podman[89619]: 2026-01-21 16:05:43.889885294 +0000 UTC m=+0.139927287 container start 6bec57fe5659fa330339cc95a9587565ca96ff06a34473e78624be66a9cf06c6 (image=quay.io/ceph/ceph:v19, name=frosty_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 11:05:43 np0005590810 podman[89619]: 2026-01-21 16:05:43.893380413 +0000 UTC m=+0.143422436 container attach 6bec57fe5659fa330339cc95a9587565ca96ff06a34473e78624be66a9cf06c6 (image=quay.io/ceph/ceph:v19, name=frosty_maxwell, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:05:44 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 21 11:05:44 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 21 11:05:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Jan 21 11:05:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3693143317' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 21 11:05:44 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/1006953982' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 21 11:05:44 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3693143317' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 21 11:05:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3693143317' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 21 11:05:44 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.ygffhs(active, since 8s)
Jan 21 11:05:44 np0005590810 systemd[1]: libpod-6bec57fe5659fa330339cc95a9587565ca96ff06a34473e78624be66a9cf06c6.scope: Deactivated successfully.
Jan 21 11:05:44 np0005590810 podman[89619]: 2026-01-21 16:05:44.394711142 +0000 UTC m=+0.644753135 container died 6bec57fe5659fa330339cc95a9587565ca96ff06a34473e78624be66a9cf06c6 (image=quay.io/ceph/ceph:v19, name=frosty_maxwell, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:05:44 np0005590810 systemd[1]: var-lib-containers-storage-overlay-3a554505623a2366b2c938e7b839e50a4b71d4ba4c206f5847ddfc7c72a21a44-merged.mount: Deactivated successfully.
Jan 21 11:05:44 np0005590810 podman[89619]: 2026-01-21 16:05:44.431554094 +0000 UTC m=+0.681596087 container remove 6bec57fe5659fa330339cc95a9587565ca96ff06a34473e78624be66a9cf06c6 (image=quay.io/ceph/ceph:v19, name=frosty_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 11:05:44 np0005590810 systemd[1]: libpod-conmon-6bec57fe5659fa330339cc95a9587565ca96ff06a34473e78624be66a9cf06c6.scope: Deactivated successfully.
Jan 21 11:05:44 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'crash'
Jan 21 11:05:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:44.514+0000 7f17fae6e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 11:05:44 np0005590810 ceph-mgr[74671]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 11:05:44 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'dashboard'
Jan 21 11:05:45 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'devicehealth'
Jan 21 11:05:45 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 21 11:05:45 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 21 11:05:45 np0005590810 python3[89759]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 11:05:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:45.176+0000 7f17fae6e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 11:05:45 np0005590810 ceph-mgr[74671]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 11:05:45 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 11:05:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 11:05:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 11:05:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  from numpy import show_config as show_numpy_config
Jan 21 11:05:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:45.353+0000 7f17fae6e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 11:05:45 np0005590810 ceph-mgr[74671]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 11:05:45 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'influx'
Jan 21 11:05:45 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3693143317' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 21 11:05:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:45.424+0000 7f17fae6e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 11:05:45 np0005590810 ceph-mgr[74671]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 11:05:45 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'insights'
Jan 21 11:05:45 np0005590810 python3[89830]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769011544.9202137-37514-274580541332024/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:05:45 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'iostat'
Jan 21 11:05:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:45.569+0000 7f17fae6e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 11:05:45 np0005590810 ceph-mgr[74671]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 11:05:45 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'k8sevents'
Jan 21 11:05:45 np0005590810 python3[89880]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:45 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'localpool'
Jan 21 11:05:45 np0005590810 podman[89881]: 2026-01-21 16:05:45.984360702 +0000 UTC m=+0.038231011 container create cde26796edcee46a739eb71bb135d2e6c0a2747c081e8311bebeb81493309ee7 (image=quay.io/ceph/ceph:v19, name=objective_fermi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 11:05:46 np0005590810 systemd[1]: Started libpod-conmon-cde26796edcee46a739eb71bb135d2e6c0a2747c081e8311bebeb81493309ee7.scope.
Jan 21 11:05:46 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 11:05:46 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:46 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cf3e26d16ff0bd51ea12a549b8689913c5142428e2c367784e62f79e3ab9af/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:46 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cf3e26d16ff0bd51ea12a549b8689913c5142428e2c367784e62f79e3ab9af/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:46 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cf3e26d16ff0bd51ea12a549b8689913c5142428e2c367784e62f79e3ab9af/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:46 np0005590810 podman[89881]: 2026-01-21 16:05:46.057181207 +0000 UTC m=+0.111051526 container init cde26796edcee46a739eb71bb135d2e6c0a2747c081e8311bebeb81493309ee7 (image=quay.io/ceph/ceph:v19, name=objective_fermi, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 11:05:46 np0005590810 podman[89881]: 2026-01-21 16:05:45.968341526 +0000 UTC m=+0.022211855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:46 np0005590810 podman[89881]: 2026-01-21 16:05:46.079610919 +0000 UTC m=+0.133481228 container start cde26796edcee46a739eb71bb135d2e6c0a2747c081e8311bebeb81493309ee7 (image=quay.io/ceph/ceph:v19, name=objective_fermi, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:05:46 np0005590810 podman[89881]: 2026-01-21 16:05:46.082882529 +0000 UTC m=+0.136752868 container attach cde26796edcee46a739eb71bb135d2e6c0a2747c081e8311bebeb81493309ee7 (image=quay.io/ceph/ceph:v19, name=objective_fermi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 21 11:05:46 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 21 11:05:46 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 21 11:05:46 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'mirroring'
Jan 21 11:05:46 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'nfs'
Jan 21 11:05:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:46.572+0000 7f17fae6e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 11:05:46 np0005590810 ceph-mgr[74671]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 11:05:46 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'orchestrator'
Jan 21 11:05:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:46.805+0000 7f17fae6e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 11:05:46 np0005590810 ceph-mgr[74671]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 11:05:46 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 11:05:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:46.880+0000 7f17fae6e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 11:05:46 np0005590810 ceph-mgr[74671]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 11:05:46 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'osd_support'
Jan 21 11:05:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:46.945+0000 7f17fae6e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 11:05:46 np0005590810 ceph-mgr[74671]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 11:05:46 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 11:05:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:47.025+0000 7f17fae6e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 11:05:47 np0005590810 ceph-mgr[74671]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 11:05:47 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'progress'
Jan 21 11:05:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:47.094+0000 7f17fae6e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 11:05:47 np0005590810 ceph-mgr[74671]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 11:05:47 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'prometheus'
Jan 21 11:05:47 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.3 deep-scrub starts
Jan 21 11:05:47 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.3 deep-scrub ok
Jan 21 11:05:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:47.412+0000 7f17fae6e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 11:05:47 np0005590810 ceph-mgr[74671]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 11:05:47 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rbd_support'
Jan 21 11:05:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:47.501+0000 7f17fae6e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 11:05:47 np0005590810 ceph-mgr[74671]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 11:05:47 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'restful'
Jan 21 11:05:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:05:47 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rgw'
Jan 21 11:05:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:47.941+0000 7f17fae6e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 11:05:47 np0005590810 ceph-mgr[74671]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 11:05:47 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rook'
Jan 21 11:05:48 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.4 deep-scrub starts
Jan 21 11:05:48 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.4 deep-scrub ok
Jan 21 11:05:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:48.534+0000 7f17fae6e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 11:05:48 np0005590810 ceph-mgr[74671]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 11:05:48 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'selftest'
Jan 21 11:05:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:48.606+0000 7f17fae6e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 11:05:48 np0005590810 ceph-mgr[74671]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 11:05:48 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'snap_schedule'
Jan 21 11:05:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:48.685+0000 7f17fae6e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 11:05:48 np0005590810 ceph-mgr[74671]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 11:05:48 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'stats'
Jan 21 11:05:48 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'status'
Jan 21 11:05:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:48.839+0000 7f17fae6e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 11:05:48 np0005590810 ceph-mgr[74671]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 11:05:48 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'telegraf'
Jan 21 11:05:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:48.913+0000 7f17fae6e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 11:05:48 np0005590810 ceph-mgr[74671]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 11:05:48 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'telemetry'
Jan 21 11:05:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:49.083+0000 7f17fae6e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 11:05:49 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 21 11:05:49 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 21 11:05:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:49.335+0000 7f17fae6e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'volumes'
Jan 21 11:05:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:49.607+0000 7f17fae6e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'zabbix'
Jan 21 11:05:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:49.693+0000 7f17fae6e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 11:05:49 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ygffhs restarted
Jan 21 11:05:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 21 11:05:49 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ygffhs
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: ms_deliver_dispatch: unhandled message 0x55e25b034d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  1: '-n'
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  2: 'mgr.compute-0.ygffhs'
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  3: '-f'
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  4: '--setuser'
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  5: 'ceph'
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  6: '--setgroup'
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  7: 'ceph'
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  8: '--default-log-to-file=false'
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  9: '--default-log-to-journald=true'
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr respawn  exe_path /proc/self/exe
Jan 21 11:05:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ignoring --setuser ceph since I am not root
Jan 21 11:05:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ignoring --setgroup ceph since I am not root
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: pidfile_write: ignore empty --pid-file
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'alerts'
Jan 21 11:05:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:49.906+0000 7eff474cf140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'balancer'
Jan 21 11:05:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:49.991+0000 7eff474cf140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 11:05:49 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'cephadm'
Jan 21 11:05:50 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.5 deep-scrub starts
Jan 21 11:05:50 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.5 deep-scrub ok
Jan 21 11:05:50 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'crash'
Jan 21 11:05:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:50.808+0000 7eff474cf140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 11:05:50 np0005590810 ceph-mgr[74671]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 11:05:50 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'dashboard'
Jan 21 11:05:51 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 21 11:05:51 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 21 11:05:51 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'devicehealth'
Jan 21 11:05:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:51.461+0000 7eff474cf140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 11:05:51 np0005590810 ceph-mgr[74671]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 11:05:51 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 11:05:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 11:05:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 11:05:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  from numpy import show_config as show_numpy_config
Jan 21 11:05:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:51.634+0000 7eff474cf140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 11:05:51 np0005590810 ceph-mgr[74671]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 11:05:51 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'influx'
Jan 21 11:05:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:51.702+0000 7eff474cf140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 11:05:51 np0005590810 ceph-mgr[74671]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 11:05:51 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'insights'
Jan 21 11:05:51 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'iostat'
Jan 21 11:05:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:51.841+0000 7eff474cf140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 11:05:51 np0005590810 ceph-mgr[74671]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 11:05:51 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'k8sevents'
Jan 21 11:05:52 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 21 11:05:52 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 21 11:05:52 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'localpool'
Jan 21 11:05:52 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 11:05:52 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'mirroring'
Jan 21 11:05:52 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'nfs'
Jan 21 11:05:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:52.920+0000 7eff474cf140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 11:05:52 np0005590810 ceph-mgr[74671]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 11:05:52 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'orchestrator'
Jan 21 11:05:53 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 21 11:05:53 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 21 11:05:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e38 e38: 2 total, 2 up, 2 in
Jan 21 11:05:53 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e38: 2 total, 2 up, 2 in
Jan 21 11:05:53 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.ygffhs(active, starting, since 3s)
Jan 21 11:05:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:53.167+0000 7eff474cf140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 11:05:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:53.257+0000 7eff474cf140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'osd_support'
Jan 21 11:05:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:53.334+0000 7eff474cf140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 11:05:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:53.414+0000 7eff474cf140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'progress'
Jan 21 11:05:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:53.497+0000 7eff474cf140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'prometheus'
Jan 21 11:05:53 np0005590810 systemd[1]: Stopping User Manager for UID 42477...
Jan 21 11:05:53 np0005590810 systemd[75652]: Activating special unit Exit the Session...
Jan 21 11:05:53 np0005590810 systemd[75652]: Stopped target Main User Target.
Jan 21 11:05:53 np0005590810 systemd[75652]: Stopped target Basic System.
Jan 21 11:05:53 np0005590810 systemd[75652]: Stopped target Paths.
Jan 21 11:05:53 np0005590810 systemd[75652]: Stopped target Sockets.
Jan 21 11:05:53 np0005590810 systemd[75652]: Stopped target Timers.
Jan 21 11:05:53 np0005590810 systemd[75652]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 21 11:05:53 np0005590810 systemd[75652]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 21 11:05:53 np0005590810 systemd[75652]: Closed D-Bus User Message Bus Socket.
Jan 21 11:05:53 np0005590810 systemd[75652]: Stopped Create User's Volatile Files and Directories.
Jan 21 11:05:53 np0005590810 systemd[75652]: Removed slice User Application Slice.
Jan 21 11:05:53 np0005590810 systemd[75652]: Reached target Shutdown.
Jan 21 11:05:53 np0005590810 systemd[75652]: Finished Exit the Session.
Jan 21 11:05:53 np0005590810 systemd[75652]: Reached target Exit the Session.
Jan 21 11:05:53 np0005590810 systemd[1]: user@42477.service: Deactivated successfully.
Jan 21 11:05:53 np0005590810 systemd[1]: Stopped User Manager for UID 42477.
Jan 21 11:05:53 np0005590810 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 21 11:05:53 np0005590810 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 21 11:05:53 np0005590810 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 21 11:05:53 np0005590810 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 21 11:05:53 np0005590810 systemd[1]: Removed slice User Slice of UID 42477.
Jan 21 11:05:53 np0005590810 systemd[1]: user-42477.slice: Consumed 22.451s CPU time.
Jan 21 11:05:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:53.870+0000 7eff474cf140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rbd_support'
Jan 21 11:05:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:53.970+0000 7eff474cf140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 11:05:53 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'restful'
Jan 21 11:05:54 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Jan 21 11:05:54 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Jan 21 11:05:54 np0005590810 ceph-mon[74380]: Active manager daemon compute-0.ygffhs restarted
Jan 21 11:05:54 np0005590810 ceph-mon[74380]: Activating manager daemon compute-0.ygffhs
Jan 21 11:05:54 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rgw'
Jan 21 11:05:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:54.428+0000 7eff474cf140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 11:05:54 np0005590810 ceph-mgr[74671]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 11:05:54 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rook'
Jan 21 11:05:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:54.996+0000 7eff474cf140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 11:05:54 np0005590810 ceph-mgr[74671]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 11:05:54 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'selftest'
Jan 21 11:05:55 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Jan 21 11:05:55 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Jan 21 11:05:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:55.074+0000 7eff474cf140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'snap_schedule'
Jan 21 11:05:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:55.153+0000 7eff474cf140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'stats'
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'status'
Jan 21 11:05:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:55.320+0000 7eff474cf140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'telegraf'
Jan 21 11:05:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:55.395+0000 7eff474cf140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'telemetry'
Jan 21 11:05:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:55.562+0000 7eff474cf140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 11:05:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:55.808+0000 7eff474cf140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 11:05:55 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'volumes'
Jan 21 11:05:56 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 21 11:05:56 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 21 11:05:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:56.087+0000 7eff474cf140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'zabbix'
Jan 21 11:05:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:05:56.164+0000 7eff474cf140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ygffhs restarted
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ygffhs
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: ms_deliver_dispatch: unhandled message 0x559b18f08d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e39 e39: 2 total, 2 up, 2 in
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map Activating!
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map I am now activating
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e39: 2 total, 2 up, 2 in
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.ygffhs(active, starting, since 0.0213623s)
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ygffhs", "id": "compute-0.ygffhs"} v 0)
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ygffhs", "id": "compute-0.ygffhs"}]: dispatch
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e1 all = 1
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: balancer
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [balancer INFO root] Starting
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Manager daemon compute-0.ygffhs is now available
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:05:56
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: cephadm
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: crash
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: dashboard
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO sso] Loading SSO DB version=1
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: devicehealth
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] Starting
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: iostat
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: nfs
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: orchestrator
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: pg_autoscaler
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: progress
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [progress INFO root] Loading...
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7efecb0939d0>, <progress.module.GhostEvent object at 0x7efecb093c10>, <progress.module.GhostEvent object at 0x7efecb093c40>, <progress.module.GhostEvent object at 0x7efecb093c70>, <progress.module.GhostEvent object at 0x7efecb093ca0>, <progress.module.GhostEvent object at 0x7efecb093cd0>, <progress.module.GhostEvent object at 0x7efecb093d00>, <progress.module.GhostEvent object at 0x7efecb093d30>] historic events
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [progress INFO root] Loaded OSDMap, ready.
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] recovery thread starting
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] starting setup
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: rbd_support
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: restful
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: status
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"} v 0)
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"}]: dispatch
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [restful INFO root] server_addr: :: server_port: 8003
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: telemetry
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [restful WARNING root] server not running: no certificate configured
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] PerfHandler: starting
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TaskHandler: starting
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"} v 0)
Jan 21 11:05:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"}]: dispatch
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: volumes
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] setup complete
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 21 11:05:56 np0005590810 systemd[1]: Created slice User Slice of UID 42477.
Jan 21 11:05:56 np0005590810 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 21 11:05:56 np0005590810 systemd-logind[795]: New session 35 of user ceph-admin.
Jan 21 11:05:56 np0005590810 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 21 11:05:56 np0005590810 systemd[1]: Starting User Manager for UID 42477...
Jan 21 11:05:56 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.module] Engine started.
Jan 21 11:05:56 np0005590810 systemd[90084]: Queued start job for default target Main User Target.
Jan 21 11:05:56 np0005590810 systemd[90084]: Created slice User Application Slice.
Jan 21 11:05:56 np0005590810 systemd[90084]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 11:05:56 np0005590810 systemd[90084]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 11:05:56 np0005590810 systemd[90084]: Reached target Paths.
Jan 21 11:05:56 np0005590810 systemd[90084]: Reached target Timers.
Jan 21 11:05:56 np0005590810 systemd[90084]: Starting D-Bus User Message Bus Socket...
Jan 21 11:05:56 np0005590810 systemd[90084]: Starting Create User's Volatile Files and Directories...
Jan 21 11:05:56 np0005590810 systemd[90084]: Finished Create User's Volatile Files and Directories.
Jan 21 11:05:56 np0005590810 systemd[90084]: Listening on D-Bus User Message Bus Socket.
Jan 21 11:05:56 np0005590810 systemd[90084]: Reached target Sockets.
Jan 21 11:05:56 np0005590810 systemd[90084]: Reached target Basic System.
Jan 21 11:05:56 np0005590810 systemd[90084]: Reached target Main User Target.
Jan 21 11:05:56 np0005590810 systemd[90084]: Startup finished in 140ms.
Jan 21 11:05:56 np0005590810 systemd[1]: Started User Manager for UID 42477.
Jan 21 11:05:56 np0005590810 systemd[1]: Started Session 35 of User ceph-admin.
Jan 21 11:05:57 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 21 11:05:57 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: Active manager daemon compute-0.ygffhs restarted
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: Activating manager daemon compute-0.ygffhs
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: Manager daemon compute-0.ygffhs is now available
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"}]: dispatch
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"}]: dispatch
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.ygffhs(active, since 1.21742s)
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14292 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v3: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 21 11:05:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0[74376]: 2026-01-21T16:05:57.396+0000 7f56a7419640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e2 new map
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e2 print_map
e2
btime 2026-01-21T16:05:57:396348+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1
 
Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	2
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-21T16:05:57.396255+0000
modified	2026-01-21T16:05:57.396255+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	
up	{}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 0 members: 
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e40 e40: 2 total, 2 up, 2 in
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e40: 2 total, 2 up, 2 in
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 21 11:05:57 np0005590810 systemd[1]: libpod-cde26796edcee46a739eb71bb135d2e6c0a2747c081e8311bebeb81493309ee7.scope: Deactivated successfully.
Jan 21 11:05:57 np0005590810 podman[89881]: 2026-01-21 16:05:57.475289247 +0000 UTC m=+11.529159586 container died cde26796edcee46a739eb71bb135d2e6c0a2747c081e8311bebeb81493309ee7 (image=quay.io/ceph/ceph:v19, name=objective_fermi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:05:57] ENGINE Bus STARTING
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:05:57] ENGINE Bus STARTING
Jan 21 11:05:57 np0005590810 systemd[1]: var-lib-containers-storage-overlay-49cf3e26d16ff0bd51ea12a549b8689913c5142428e2c367784e62f79e3ab9af-merged.mount: Deactivated successfully.
Jan 21 11:05:57 np0005590810 podman[89881]: 2026-01-21 16:05:57.522363637 +0000 UTC m=+11.576233946 container remove cde26796edcee46a739eb71bb135d2e6c0a2747c081e8311bebeb81493309ee7 (image=quay.io/ceph/ceph:v19, name=objective_fermi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:05:57 np0005590810 systemd[1]: libpod-conmon-cde26796edcee46a739eb71bb135d2e6c0a2747c081e8311bebeb81493309ee7.scope: Deactivated successfully.
Jan 21 11:05:57 np0005590810 podman[90225]: 2026-01-21 16:05:57.565501063 +0000 UTC m=+0.071763209 container exec 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:05:57] ENGINE Serving on http://192.168.122.100:8765
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:05:57] ENGINE Serving on http://192.168.122.100:8765
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:05:57 np0005590810 podman[90225]: 2026-01-21 16:05:57.667781859 +0000 UTC m=+0.174043965 container exec_died 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:05:57] ENGINE Serving on https://192.168.122.100:7150
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:05:57] ENGINE Serving on https://192.168.122.100:7150
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:05:57] ENGINE Bus STARTED
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:05:57] ENGINE Bus STARTED
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:05:57] ENGINE Client ('192.168.122.100', 36692) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 11:05:57 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:05:57] ENGINE Client ('192.168.122.100', 36692) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 11:05:57 np0005590810 python3[90307]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:57 np0005590810 podman[90351]: 2026-01-21 16:05:57.868061787 +0000 UTC m=+0.038718187 container create 6e02a4243fa732f56537139cd54af9594e41340c319a6097bc3a4c96f6469e1d (image=quay.io/ceph/ceph:v19, name=compassionate_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:05:57 np0005590810 systemd[1]: Started libpod-conmon-6e02a4243fa732f56537139cd54af9594e41340c319a6097bc3a4c96f6469e1d.scope.
Jan 21 11:05:57 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:57 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298ae83cf5249168c1b7d44579dcdc6e1d75b0aedcf73ca395cc7d75c9954d1b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:57 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298ae83cf5249168c1b7d44579dcdc6e1d75b0aedcf73ca395cc7d75c9954d1b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:57 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298ae83cf5249168c1b7d44579dcdc6e1d75b0aedcf73ca395cc7d75c9954d1b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:57 np0005590810 podman[90351]: 2026-01-21 16:05:57.938425949 +0000 UTC m=+0.109082369 container init 6e02a4243fa732f56537139cd54af9594e41340c319a6097bc3a4c96f6469e1d (image=quay.io/ceph/ceph:v19, name=compassionate_kilby, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:05:57 np0005590810 podman[90351]: 2026-01-21 16:05:57.94433402 +0000 UTC m=+0.114990420 container start 6e02a4243fa732f56537139cd54af9594e41340c319a6097bc3a4c96f6469e1d (image=quay.io/ceph/ceph:v19, name=compassionate_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:05:57 np0005590810 podman[90351]: 2026-01-21 16:05:57.851647609 +0000 UTC m=+0.022304039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:57 np0005590810 podman[90351]: 2026-01-21 16:05:57.947374853 +0000 UTC m=+0.118031283 container attach 6e02a4243fa732f56537139cd54af9594e41340c319a6097bc3a4c96f6469e1d (image=quay.io/ceph/ceph:v19, name=compassionate_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:05:57 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 21 11:05:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:57 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:05:58 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14312 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:05:58 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 11:05:58 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 11:05:58 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] Check health
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 compassionate_kilby[90383]: Scheduled mds.cephfs update...
Jan 21 11:05:58 np0005590810 systemd[1]: libpod-6e02a4243fa732f56537139cd54af9594e41340c319a6097bc3a4c96f6469e1d.scope: Deactivated successfully.
Jan 21 11:05:58 np0005590810 podman[90351]: 2026-01-21 16:05:58.366790468 +0000 UTC m=+0.537446868 container died 6e02a4243fa732f56537139cd54af9594e41340c319a6097bc3a4c96f6469e1d (image=quay.io/ceph/ceph:v19, name=compassionate_kilby, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:05:58 np0005590810 systemd[1]: var-lib-containers-storage-overlay-298ae83cf5249168c1b7d44579dcdc6e1d75b0aedcf73ca395cc7d75c9954d1b-merged.mount: Deactivated successfully.
Jan 21 11:05:58 np0005590810 podman[90351]: 2026-01-21 16:05:58.400477563 +0000 UTC m=+0.571133973 container remove 6e02a4243fa732f56537139cd54af9594e41340c319a6097bc3a4c96f6469e1d (image=quay.io/ceph/ceph:v19, name=compassionate_kilby, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 11:05:58 np0005590810 systemd[1]: libpod-conmon-6e02a4243fa732f56537139cd54af9594e41340c319a6097bc3a4c96f6469e1d.scope: Deactivated successfully.
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:05:57] ENGINE Bus STARTING
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:05:57] ENGINE Serving on http://192.168.122.100:8765
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:05:57] ENGINE Serving on https://192.168.122.100:7150
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:05:57] ENGINE Bus STARTED
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:05:57] ENGINE Client ('192.168.122.100', 36692) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:58 np0005590810 python3[90525]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:05:58 np0005590810 podman[90537]: 2026-01-21 16:05:58.777166235 +0000 UTC m=+0.042608809 container create 7736f702eaefd4568643408d5ae74f158d3eb6a26276a5be71d6550662f3f7dc (image=quay.io/ceph/ceph:v19, name=gracious_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:05:58 np0005590810 systemd[1]: Started libpod-conmon-7736f702eaefd4568643408d5ae74f158d3eb6a26276a5be71d6550662f3f7dc.scope.
Jan 21 11:05:58 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:05:58 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b92ea224b679d5a2d5a6f0f5ce2b2af4d1824d043026b367c260451a632861e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:58 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b92ea224b679d5a2d5a6f0f5ce2b2af4d1824d043026b367c260451a632861e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:58 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b92ea224b679d5a2d5a6f0f5ce2b2af4d1824d043026b367c260451a632861e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:05:58 np0005590810 podman[90537]: 2026-01-21 16:05:58.849064059 +0000 UTC m=+0.114506653 container init 7736f702eaefd4568643408d5ae74f158d3eb6a26276a5be71d6550662f3f7dc (image=quay.io/ceph/ceph:v19, name=gracious_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:05:58 np0005590810 podman[90537]: 2026-01-21 16:05:58.854544935 +0000 UTC m=+0.119987509 container start 7736f702eaefd4568643408d5ae74f158d3eb6a26276a5be71d6550662f3f7dc (image=quay.io/ceph/ceph:v19, name=gracious_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:05:58 np0005590810 podman[90537]: 2026-01-21 16:05:58.760994706 +0000 UTC m=+0.026437300 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:05:58 np0005590810 podman[90537]: 2026-01-21 16:05:58.85791987 +0000 UTC m=+0.123362464 container attach 7736f702eaefd4568643408d5ae74f158d3eb6a26276a5be71d6550662f3f7dc (image=quay.io/ceph/ceph:v19, name=gracious_golick, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 11:05:58 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 21 11:05:58 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 21 11:05:59 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.ygffhs(active, since 2s)
Jan 21 11:05:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:05:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:05:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:05:59 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14320 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:05:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Jan 21 11:05:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Jan 21 11:05:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:05:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:05:59 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 21 11:05:59 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 21 11:06:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v6: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e41 e41: 2 total, 2 up, 2 in
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e41: 2 total, 2 up, 2 in
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.ygffhs(active, since 4s)
Jan 21 11:06:00 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 41 pg[8.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Jan 21 11:06:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Jan 21 11:06:00 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 21 11:06:00 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: Updating compute-0:/etc/ceph/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: Updating compute-1:/etc/ceph/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: Updating compute-2:/etc/ceph/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-mon[74380]: Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:06:01 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 21 11:06:01 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 21 11:06:02 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Jan 21 11:06:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e42 e42: 2 total, 2 up, 2 in
Jan 21 11:06:02 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e42: 2 total, 2 up, 2 in
Jan 21 11:06:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 42 pg[8.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 11:06:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v9: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:02 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 11:06:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 21 11:06:02 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:02 np0005590810 systemd[1]: libpod-7736f702eaefd4568643408d5ae74f158d3eb6a26276a5be71d6550662f3f7dc.scope: Deactivated successfully.
Jan 21 11:06:02 np0005590810 podman[90537]: 2026-01-21 16:06:02.239857056 +0000 UTC m=+3.505299630 container died 7736f702eaefd4568643408d5ae74f158d3eb6a26276a5be71d6550662f3f7dc (image=quay.io/ceph/ceph:v19, name=gracious_golick, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 21 11:06:02 np0005590810 systemd[1]: var-lib-containers-storage-overlay-1b92ea224b679d5a2d5a6f0f5ce2b2af4d1824d043026b367c260451a632861e-merged.mount: Deactivated successfully.
Jan 21 11:06:02 np0005590810 podman[90537]: 2026-01-21 16:06:02.276661567 +0000 UTC m=+3.542104141 container remove 7736f702eaefd4568643408d5ae74f158d3eb6a26276a5be71d6550662f3f7dc (image=quay.io/ceph/ceph:v19, name=gracious_golick, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 21 11:06:02 np0005590810 systemd[1]: libpod-conmon-7736f702eaefd4568643408d5ae74f158d3eb6a26276a5be71d6550662f3f7dc.scope: Deactivated successfully.
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:06:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:06:02 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:06:03 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Jan 21 11:06:03 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e43 e43: 2 total, 2 up, 2 in
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.ygffhs(active, since 7s)
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e43: 2 total, 2 up, 2 in
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:03 np0005590810 python3[91632]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:06:03 np0005590810 python3[91710]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769011563.083994-37567-93193974485042/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=2ea395d6108431abaf3eb9a42be6b8fa8c96438d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:03 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 60c4767d-1c36-4690-8fbd-69964c205c71 (Updating mgr deployment (+2 -> 3))
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdxyxe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 21 11:06:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdxyxe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdxyxe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:04 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.kdxyxe on compute-2
Jan 21 11:06:04 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.kdxyxe on compute-2
Jan 21 11:06:04 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.0 deep-scrub starts
Jan 21 11:06:04 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.0 deep-scrub ok
Jan 21 11:06:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v11: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Jan 21 11:06:04 np0005590810 python3[91760]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:04 np0005590810 podman[91761]: 2026-01-21 16:06:04.320621617 +0000 UTC m=+0.061600024 container create f5565a4649188632651ab1f974407d18da4ab4e27f16921adb76e93b2b246223 (image=quay.io/ceph/ceph:v19, name=hopeful_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 21 11:06:04 np0005590810 systemd[1]: Started libpod-conmon-f5565a4649188632651ab1f974407d18da4ab4e27f16921adb76e93b2b246223.scope.
Jan 21 11:06:04 np0005590810 podman[91761]: 2026-01-21 16:06:04.289050914 +0000 UTC m=+0.030029381 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:04 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:04 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3dcdb208d3fc55138b2ff8af42645898647a0bcae2bf455a606deae3dd6171c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:04 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3dcdb208d3fc55138b2ff8af42645898647a0bcae2bf455a606deae3dd6171c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:04 np0005590810 podman[91761]: 2026-01-21 16:06:04.421173945 +0000 UTC m=+0.162152332 container init f5565a4649188632651ab1f974407d18da4ab4e27f16921adb76e93b2b246223 (image=quay.io/ceph/ceph:v19, name=hopeful_shannon, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:06:04 np0005590810 podman[91761]: 2026-01-21 16:06:04.427375216 +0000 UTC m=+0.168353623 container start f5565a4649188632651ab1f974407d18da4ab4e27f16921adb76e93b2b246223 (image=quay.io/ceph/ceph:v19, name=hopeful_shannon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:06:04 np0005590810 podman[91761]: 2026-01-21 16:06:04.432248861 +0000 UTC m=+0.173227258 container attach f5565a4649188632651ab1f974407d18da4ab4e27f16921adb76e93b2b246223 (image=quay.io/ceph/ceph:v19, name=hopeful_shannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdxyxe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdxyxe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2398367144' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 21 11:06:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2398367144' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 21 11:06:04 np0005590810 systemd[1]: libpod-f5565a4649188632651ab1f974407d18da4ab4e27f16921adb76e93b2b246223.scope: Deactivated successfully.
Jan 21 11:06:04 np0005590810 podman[91761]: 2026-01-21 16:06:04.963722966 +0000 UTC m=+0.704701333 container died f5565a4649188632651ab1f974407d18da4ab4e27f16921adb76e93b2b246223 (image=quay.io/ceph/ceph:v19, name=hopeful_shannon, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:06:04 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d3dcdb208d3fc55138b2ff8af42645898647a0bcae2bf455a606deae3dd6171c-merged.mount: Deactivated successfully.
Jan 21 11:06:04 np0005590810 podman[91761]: 2026-01-21 16:06:04.997714711 +0000 UTC m=+0.738693078 container remove f5565a4649188632651ab1f974407d18da4ab4e27f16921adb76e93b2b246223 (image=quay.io/ceph/ceph:v19, name=hopeful_shannon, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Jan 21 11:06:05 np0005590810 systemd[1]: libpod-conmon-f5565a4649188632651ab1f974407d18da4ab4e27f16921adb76e93b2b246223.scope: Deactivated successfully.
Jan 21 11:06:05 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Jan 21 11:06:05 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Jan 21 11:06:05 np0005590810 ceph-mon[74380]: Deploying daemon mgr.compute-2.kdxyxe on compute-2
Jan 21 11:06:05 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2398367144' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 21 11:06:05 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/2398367144' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 21 11:06:05 np0005590810 python3[91839]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:05 np0005590810 podman[91841]: 2026-01-21 16:06:05.791532912 +0000 UTC m=+0.045645953 container create 25167ee2953d5e0bcba2e88ef0083a01787899c893a55f7df27f894b8107ebbe (image=quay.io/ceph/ceph:v19, name=objective_swirles, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:06:05 np0005590810 systemd[1]: Started libpod-conmon-25167ee2953d5e0bcba2e88ef0083a01787899c893a55f7df27f894b8107ebbe.scope.
Jan 21 11:06:05 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:05 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03bda7edfc3291aa1a540544c4ee2b31fd9917440164e624fc1491857090b9c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:05 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03bda7edfc3291aa1a540544c4ee2b31fd9917440164e624fc1491857090b9c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:05 np0005590810 podman[91841]: 2026-01-21 16:06:05.859243612 +0000 UTC m=+0.113356683 container init 25167ee2953d5e0bcba2e88ef0083a01787899c893a55f7df27f894b8107ebbe (image=quay.io/ceph/ceph:v19, name=objective_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:06:05 np0005590810 podman[91841]: 2026-01-21 16:06:05.864378958 +0000 UTC m=+0.118491999 container start 25167ee2953d5e0bcba2e88ef0083a01787899c893a55f7df27f894b8107ebbe (image=quay.io/ceph/ceph:v19, name=objective_swirles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 21 11:06:05 np0005590810 podman[91841]: 2026-01-21 16:06:05.867274216 +0000 UTC m=+0.121387257 container attach 25167ee2953d5e0bcba2e88ef0083a01787899c893a55f7df27f894b8107ebbe (image=quay.io/ceph/ceph:v19, name=objective_swirles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:06:05 np0005590810 podman[91841]: 2026-01-21 16:06:05.776451159 +0000 UTC m=+0.030564230 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:06 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 21 11:06:06 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 21 11:06:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v12: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643714572' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 11:06:06 np0005590810 objective_swirles[91857]: 
Jan 21 11:06:06 np0005590810 objective_swirles[91857]: {"fsid":"d9745984-fea8-5195-8ec5-61f685b5c785","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-2"],"quorum_age":30,"monmap":{"epoch":2,"min_mon_release_name":"squid","num_mons":2},"osdmap":{"epoch":43,"num_osds":2,"num_up_osds":2,"osd_up_since":1769011487,"num_in_osds":2,"osd_in_since":1769011468,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194}],"num_pgs":194,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":56307712,"bytes_avail":42884976640,"bytes_total":42941284352,"read_bytes_sec":30031,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2026-01-21T16:05:57:396348+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":2,"modified":"2026-01-21T16:04:17.975788+0000","services":{}},"progress_events":{"60c4767d-1c36-4690-8fbd-69964c205c71":{"message":"Updating mgr deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 21 11:06:06 np0005590810 systemd[1]: libpod-25167ee2953d5e0bcba2e88ef0083a01787899c893a55f7df27f894b8107ebbe.scope: Deactivated successfully.
Jan 21 11:06:06 np0005590810 podman[91841]: 2026-01-21 16:06:06.323623197 +0000 UTC m=+0.577736288 container died 25167ee2953d5e0bcba2e88ef0083a01787899c893a55f7df27f894b8107ebbe (image=quay.io/ceph/ceph:v19, name=objective_swirles, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:06:06 np0005590810 systemd[1]: var-lib-containers-storage-overlay-03bda7edfc3291aa1a540544c4ee2b31fd9917440164e624fc1491857090b9c4-merged.mount: Deactivated successfully.
Jan 21 11:06:06 np0005590810 podman[91841]: 2026-01-21 16:06:06.364704192 +0000 UTC m=+0.618817233 container remove 25167ee2953d5e0bcba2e88ef0083a01787899c893a55f7df27f894b8107ebbe (image=quay.io/ceph/ceph:v19, name=objective_swirles, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:06:06 np0005590810 systemd[1]: libpod-conmon-25167ee2953d5e0bcba2e88ef0083a01787899c893a55f7df27f894b8107ebbe.scope: Deactivated successfully.
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.oewgcf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.oewgcf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.oewgcf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:06 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.oewgcf on compute-1
Jan 21 11:06:06 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.oewgcf on compute-1
Jan 21 11:06:07 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 21 11:06:07 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 21 11:06:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:06:07 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:07 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.oewgcf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 11:06:07 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.oewgcf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 21 11:06:07 np0005590810 ceph-mon[74380]: Deploying daemon mgr.compute-1.oewgcf on compute-1
Jan 21 11:06:08 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 21 11:06:08 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 21 11:06:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v13: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 24 KiB/s rd, 0 B/s wr, 9 op/s
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:08 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 60c4767d-1c36-4690-8fbd-69964c205c71 (Updating mgr deployment (+2 -> 3))
Jan 21 11:06:08 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 60c4767d-1c36-4690-8fbd-69964c205c71 (Updating mgr deployment (+2 -> 3)) in 5 seconds
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:08 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 23ddc48d-0075-4d7a-be6b-fef5361bb3ef (Updating mon deployment (+1 -> 3))
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:08 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 21 11:06:08 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:08 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 11:06:09 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 21 11:06:09 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 21 11:06:09 np0005590810 ceph-mon[74380]: Deploying daemon mon.compute-1 on compute-1
Jan 21 11:06:10 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 21 11:06:10 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 21 11:06:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v14: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 22 KiB/s rd, 0 B/s wr, 8 op/s
Jan 21 11:06:11 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 21 11:06:11 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 21 11:06:11 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 9 completed events
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:11 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 23ddc48d-0075-4d7a-be6b-fef5361bb3ef (Updating mon deployment (+1 -> 3))
Jan 21 11:06:11 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 23ddc48d-0075-4d7a-be6b-fef5361bb3ef (Updating mon deployment (+1 -> 3)) in 3 seconds
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:11 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 2ef6b128-292f-4016-8e76-78305295f883 (Updating node-exporter deployment (+3 -> 3))
Jan 21 11:06:11 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Jan 21 11:06:11 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 21 11:06:11 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2422729452; not ready for session (expect reconnect)
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 11:06:11 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 11:06:11 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 21 11:06:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:06:12 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 21 11:06:12 np0005590810 systemd[1]: Reloading.
Jan 21 11:06:12 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 21 11:06:12 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:06:12 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:06:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v15: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Jan 21 11:06:12 np0005590810 systemd[1]: Reloading.
Jan 21 11:06:12 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:06:12 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:06:12 np0005590810 systemd[1]: Starting Ceph node-exporter.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:06:12 np0005590810 bash[92109]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Jan 21 11:06:12 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2422729452; not ready for session (expect reconnect)
Jan 21 11:06:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 21 11:06:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 11:06:12 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 21 11:06:13 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 21 11:06:13 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 21 11:06:13 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/4002132825; not ready for session (expect reconnect)
Jan 21 11:06:13 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2422729452; not ready for session (expect reconnect)
Jan 21 11:06:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 21 11:06:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 11:06:13 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 21 11:06:14 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 21 11:06:14 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 21 11:06:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v16: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:14 np0005590810 bash[92109]: Getting image source signatures
Jan 21 11:06:14 np0005590810 bash[92109]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Jan 21 11:06:14 np0005590810 bash[92109]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Jan 21 11:06:14 np0005590810 bash[92109]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Jan 21 11:06:14 np0005590810 bash[92109]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Jan 21 11:06:14 np0005590810 bash[92109]: Writing manifest to image destination
Jan 21 11:06:14 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/4002132825; not ready for session (expect reconnect)
Jan 21 11:06:14 np0005590810 podman[92109]: 2026-01-21 16:06:14.844095962 +0000 UTC m=+2.065077384 container create 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:06:14 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44f01cecc347b1f4a1e92ec5b232299434fb9292d7682b75e0cc53326f6bbc38/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:14 np0005590810 podman[92109]: 2026-01-21 16:06:14.899923918 +0000 UTC m=+2.120905390 container init 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:06:14 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2422729452; not ready for session (expect reconnect)
Jan 21 11:06:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 21 11:06:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 11:06:14 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 21 11:06:14 np0005590810 podman[92109]: 2026-01-21 16:06:14.831083705 +0000 UTC m=+2.052065147 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Jan 21 11:06:14 np0005590810 podman[92109]: 2026-01-21 16:06:14.90475316 +0000 UTC m=+2.125734602 container start 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:06:14 np0005590810 bash[92109]: 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.913Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.913Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.914Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.914Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=arp
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=bcache
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=bonding
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=cpu
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=dmi
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=edac
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=entropy
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=filefd
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=hwmon
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=netclass
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=netdev
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=netstat
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=nfs
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=nvme
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=os
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=pressure
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=rapl
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=selinux
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=softnet
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=stat
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=textfile
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=thermal_zone
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=time
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=uname
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=xfs
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.915Z caller=node_exporter.go:117 level=info collector=zfs
Jan 21 11:06:14 np0005590810 systemd[1]: Started Ceph node-exporter.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.916Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Jan 21 11:06:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0[92185]: ts=2026-01-21T16:06:14.916Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Jan 21 11:06:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:06:15 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 21 11:06:15 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 21 11:06:15 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/4002132825; not ready for session (expect reconnect)
Jan 21 11:06:15 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2422729452; not ready for session (expect reconnect)
Jan 21 11:06:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 21 11:06:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 11:06:15 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 21 11:06:15 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 21 11:06:15 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 21 11:06:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v17: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:16 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 10 completed events
Jan 21 11:06:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:06:16 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/4002132825; not ready for session (expect reconnect)
Jan 21 11:06:16 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2422729452; not ready for session (expect reconnect)
Jan 21 11:06:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 21 11:06:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 11:06:16 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 21 11:06:16 np0005590810 ceph-mon[74380]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 21 11:06:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:06:16 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Jan 21 11:06:16 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Jan 21 11:06:16 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : monmap epoch 3
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsid d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : last_changed 2026-01-21T16:06:11.900214+0000
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : created 2026-01-21T16:02:46.356140+0000
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e43: 2 total, 2 up, 2 in
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.ygffhs(active, since 20s)
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Jan 21 11:06:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0[74376]: 2026-01-21T16:06:17.102+0000 7f56a7419640 -1 log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Jan 21 11:06:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0[74376]: 2026-01-21T16:06:17.102+0000 7f56a7419640 -1 log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Jan 21 11:06:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0[74376]: 2026-01-21T16:06:17.102+0000 7f56a7419640 -1 log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Jan 21 11:06:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0[74376]: 2026-01-21T16:06:17.102+0000 7f56a7419640 -1 log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Jan 21 11:06:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0[74376]: 2026-01-21T16:06:17.102+0000 7f56a7419640 -1 log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdxyxe started
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: mon.compute-0 calling monitor election
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: mon.compute-2 calling monitor election
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: mon.compute-1 calling monitor election
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Jan 21 11:06:17 np0005590810 ceph-mon[74380]:    fs cephfs is offline because no MDS is active for it.
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Jan 21 11:06:17 np0005590810 ceph-mon[74380]:    fs cephfs has 0 MDS online, but wants 1
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.ygffhs(active, since 21s), standbys: compute-2.kdxyxe
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.kdxyxe", "id": "compute-2.kdxyxe"} v 0)
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr metadata", "who": "compute-2.kdxyxe", "id": "compute-2.kdxyxe"}]: dispatch
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:17 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Jan 21 11:06:17 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:06:17 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2422729452; not ready for session (expect reconnect)
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 21 11:06:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 11:06:17 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.14 deep-scrub starts
Jan 21 11:06:17 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.14 deep-scrub ok
Jan 21 11:06:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v18: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:18 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:18 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:18 np0005590810 ceph-mon[74380]: Deploying daemon node-exporter.compute-1 on compute-1
Jan 21 11:06:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:06:18.904+0000 7efee3954640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Jan 21 11:06:18 np0005590810 ceph-mgr[74671]: mgr.server handle_report got status from non-daemon mon.compute-1
Jan 21 11:06:18 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 21 11:06:18 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 21 11:06:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.oewgcf started
Jan 21 11:06:19 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-1.oewgcf 192.168.122.101:0/2304298412; not ready for session (expect reconnect)
Jan 21 11:06:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.ygffhs(active, since 23s), standbys: compute-1.oewgcf, compute-2.kdxyxe
Jan 21 11:06:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.oewgcf", "id": "compute-1.oewgcf"} v 0)
Jan 21 11:06:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr metadata", "who": "compute-1.oewgcf", "id": "compute-1.oewgcf"}]: dispatch
Jan 21 11:06:19 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 21 11:06:19 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 21 11:06:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v19: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:06:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:06:21 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 21 11:06:21 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 21 11:06:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 21 11:06:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:21 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Jan 21 11:06:21 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Jan 21 11:06:21 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:21 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:21 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:21 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 21 11:06:22 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 21 11:06:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v20: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:22 np0005590810 ceph-mon[74380]: Deploying daemon node-exporter.compute-2 on compute-2
Jan 21 11:06:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:06:23 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 21 11:06:23 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 21 11:06:24 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 21 11:06:24 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 21 11:06:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v21: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:25 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 21 11:06:25 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 21 11:06:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:06:25 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:26 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 2ef6b128-292f-4016-8e76-78305295f883 (Updating node-exporter deployment (+3 -> 3))
Jan 21 11:06:26 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 2ef6b128-292f-4016-8e76-78305295f883 (Updating node-exporter deployment (+3 -> 3)) in 14 seconds
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:26 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 21 11:06:26 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 21 11:06:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v22: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:06:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:06:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:06:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:06:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:06:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:06:26 np0005590810 podman[92311]: 2026-01-21 16:06:26.548803058 +0000 UTC m=+0.039003902 container create 0f281c75c28fceabaf607cfce588a36567fea16847ea705b26b2fcadd18a94fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:06:26 np0005590810 systemd[1]: Started libpod-conmon-0f281c75c28fceabaf607cfce588a36567fea16847ea705b26b2fcadd18a94fc.scope.
Jan 21 11:06:26 np0005590810 python3[92308]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:26 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:26 np0005590810 podman[92311]: 2026-01-21 16:06:26.532464569 +0000 UTC m=+0.022665433 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:06:26 np0005590810 podman[92311]: 2026-01-21 16:06:26.635347006 +0000 UTC m=+0.125547900 container init 0f281c75c28fceabaf607cfce588a36567fea16847ea705b26b2fcadd18a94fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:06:26 np0005590810 podman[92311]: 2026-01-21 16:06:26.650474334 +0000 UTC m=+0.140675178 container start 0f281c75c28fceabaf607cfce588a36567fea16847ea705b26b2fcadd18a94fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:06:26 np0005590810 podman[92311]: 2026-01-21 16:06:26.654171609 +0000 UTC m=+0.144372553 container attach 0f281c75c28fceabaf607cfce588a36567fea16847ea705b26b2fcadd18a94fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:06:26 np0005590810 heuristic_ritchie[92327]: 167 167
Jan 21 11:06:26 np0005590810 systemd[1]: libpod-0f281c75c28fceabaf607cfce588a36567fea16847ea705b26b2fcadd18a94fc.scope: Deactivated successfully.
Jan 21 11:06:26 np0005590810 conmon[92327]: conmon 0f281c75c28fceabaf60 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f281c75c28fceabaf607cfce588a36567fea16847ea705b26b2fcadd18a94fc.scope/container/memory.events
Jan 21 11:06:26 np0005590810 podman[92311]: 2026-01-21 16:06:26.661462394 +0000 UTC m=+0.151663238 container died 0f281c75c28fceabaf607cfce588a36567fea16847ea705b26b2fcadd18a94fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:06:26 np0005590810 podman[92331]: 2026-01-21 16:06:26.681350932 +0000 UTC m=+0.047234558 container create f1fe163b2c16eee39e4d7f971d5168d8bdbc8ee899271a3851b05eaadf3386d1 (image=quay.io/ceph/ceph:v19, name=wizardly_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:06:26 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ae41c167e5a5396d805a1d15a094d87377da548df22e86d8de39f273f04f5f25-merged.mount: Deactivated successfully.
Jan 21 11:06:26 np0005590810 podman[92311]: 2026-01-21 16:06:26.710682027 +0000 UTC m=+0.200882871 container remove 0f281c75c28fceabaf607cfce588a36567fea16847ea705b26b2fcadd18a94fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 21 11:06:26 np0005590810 systemd[1]: Started libpod-conmon-f1fe163b2c16eee39e4d7f971d5168d8bdbc8ee899271a3851b05eaadf3386d1.scope.
Jan 21 11:06:26 np0005590810 systemd[1]: libpod-conmon-0f281c75c28fceabaf607cfce588a36567fea16847ea705b26b2fcadd18a94fc.scope: Deactivated successfully.
Jan 21 11:06:26 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79fdd42cc8f0d4630f602250cdf3de11d2ea38f9f0df7eb081104bc40048b79a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79fdd42cc8f0d4630f602250cdf3de11d2ea38f9f0df7eb081104bc40048b79a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:26 np0005590810 podman[92331]: 2026-01-21 16:06:26.664728364 +0000 UTC m=+0.030612010 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:26 np0005590810 podman[92331]: 2026-01-21 16:06:26.760783891 +0000 UTC m=+0.126667547 container init f1fe163b2c16eee39e4d7f971d5168d8bdbc8ee899271a3851b05eaadf3386d1 (image=quay.io/ceph/ceph:v19, name=wizardly_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:06:26 np0005590810 podman[92331]: 2026-01-21 16:06:26.766367578 +0000 UTC m=+0.132251204 container start f1fe163b2c16eee39e4d7f971d5168d8bdbc8ee899271a3851b05eaadf3386d1 (image=quay.io/ceph/ceph:v19, name=wizardly_ride, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 11:06:26 np0005590810 podman[92331]: 2026-01-21 16:06:26.786327949 +0000 UTC m=+0.152211575 container attach f1fe163b2c16eee39e4d7f971d5168d8bdbc8ee899271a3851b05eaadf3386d1 (image=quay.io/ceph/ceph:v19, name=wizardly_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Jan 21 11:06:26 np0005590810 podman[92372]: 2026-01-21 16:06:26.857199321 +0000 UTC m=+0.040038627 container create 4d2b06648056a08cc48ba868c8996d606ddd0506729f3cdaadfa761d34e062cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_williams, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:06:26 np0005590810 systemd[1]: Started libpod-conmon-4d2b06648056a08cc48ba868c8996d606ddd0506729f3cdaadfa761d34e062cb.scope.
Jan 21 11:06:26 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c6cd18506058b4d177c2606016f586d19642386e27b9411cf7d6533b1007c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c6cd18506058b4d177c2606016f586d19642386e27b9411cf7d6533b1007c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c6cd18506058b4d177c2606016f586d19642386e27b9411cf7d6533b1007c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c6cd18506058b4d177c2606016f586d19642386e27b9411cf7d6533b1007c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c6cd18506058b4d177c2606016f586d19642386e27b9411cf7d6533b1007c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:26 np0005590810 podman[92372]: 2026-01-21 16:06:26.839077172 +0000 UTC m=+0.021916488 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:06:26 np0005590810 podman[92372]: 2026-01-21 16:06:26.939951601 +0000 UTC m=+0.122790907 container init 4d2b06648056a08cc48ba868c8996d606ddd0506729f3cdaadfa761d34e062cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 21 11:06:26 np0005590810 podman[92372]: 2026-01-21 16:06:26.946274204 +0000 UTC m=+0.129113500 container start 4d2b06648056a08cc48ba868c8996d606ddd0506729f3cdaadfa761d34e062cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:06:26 np0005590810 podman[92372]: 2026-01-21 16:06:26.949346027 +0000 UTC m=+0.132185323 container attach 4d2b06648056a08cc48ba868c8996d606ddd0506729f3cdaadfa761d34e062cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:06:27 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 21 11:06:27 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 21 11:06:27 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 11 completed events
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006722466' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 11:06:27 np0005590810 wizardly_ride[92363]: 
Jan 21 11:06:27 np0005590810 wizardly_ride[92363]: {"fsid":"d9745984-fea8-5195-8ec5-61f685b5c785","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":10,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":43,"num_osds":2,"num_up_osds":2,"osd_up_since":1769011487,"num_in_osds":2,"osd_in_since":1769011468,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194}],"num_pgs":194,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":56348672,"bytes_avail":42884935680,"bytes_total":42941284352},"fsmap":{"epoch":2,"btime":"2026-01-21T16:05:57:396348+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":2,"modified":"2026-01-21T16:04:17.975788+0000","services":{}},"progress_events":{}}
Jan 21 11:06:27 np0005590810 systemd[1]: libpod-f1fe163b2c16eee39e4d7f971d5168d8bdbc8ee899271a3851b05eaadf3386d1.scope: Deactivated successfully.
Jan 21 11:06:27 np0005590810 podman[92331]: 2026-01-21 16:06:27.207656698 +0000 UTC m=+0.573540344 container died f1fe163b2c16eee39e4d7f971d5168d8bdbc8ee899271a3851b05eaadf3386d1 (image=quay.io/ceph/ceph:v19, name=wizardly_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:06:27 np0005590810 quizzical_williams[92407]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:06:27 np0005590810 quizzical_williams[92407]: --> All data devices are unavailable
Jan 21 11:06:27 np0005590810 systemd[1]: var-lib-containers-storage-overlay-79fdd42cc8f0d4630f602250cdf3de11d2ea38f9f0df7eb081104bc40048b79a-merged.mount: Deactivated successfully.
Jan 21 11:06:27 np0005590810 systemd[1]: libpod-4d2b06648056a08cc48ba868c8996d606ddd0506729f3cdaadfa761d34e062cb.scope: Deactivated successfully.
Jan 21 11:06:27 np0005590810 podman[92331]: 2026-01-21 16:06:27.256002642 +0000 UTC m=+0.621886268 container remove f1fe163b2c16eee39e4d7f971d5168d8bdbc8ee899271a3851b05eaadf3386d1 (image=quay.io/ceph/ceph:v19, name=wizardly_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:06:27 np0005590810 podman[92372]: 2026-01-21 16:06:27.260658349 +0000 UTC m=+0.443497645 container died 4d2b06648056a08cc48ba868c8996d606ddd0506729f3cdaadfa761d34e062cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_williams, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 11:06:27 np0005590810 systemd[1]: libpod-conmon-f1fe163b2c16eee39e4d7f971d5168d8bdbc8ee899271a3851b05eaadf3386d1.scope: Deactivated successfully.
Jan 21 11:06:27 np0005590810 podman[92372]: 2026-01-21 16:06:27.307877535 +0000 UTC m=+0.490716831 container remove 4d2b06648056a08cc48ba868c8996d606ddd0506729f3cdaadfa761d34e062cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:06:27 np0005590810 systemd[1]: libpod-conmon-4d2b06648056a08cc48ba868c8996d606ddd0506729f3cdaadfa761d34e062cb.scope: Deactivated successfully.
Jan 21 11:06:27 np0005590810 python3[92499]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:27 np0005590810 systemd[1]: var-lib-containers-storage-overlay-b5c6cd18506058b4d177c2606016f586d19642386e27b9411cf7d6533b1007c6-merged.mount: Deactivated successfully.
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:06:27 np0005590810 podman[92523]: 2026-01-21 16:06:27.634079417 +0000 UTC m=+0.057213554 container create ab01337e4ad065d5a868780aadfdc3d09df331fa81ef130a003415fdc2d681df (image=quay.io/ceph/ceph:v19, name=intelligent_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:06:27 np0005590810 systemd[1]: Started libpod-conmon-ab01337e4ad065d5a868780aadfdc3d09df331fa81ef130a003415fdc2d681df.scope.
Jan 21 11:06:27 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d35e0187e5cccb440e02e8a494ad6659b9edb850b0c2f48f0cf7e1abd9f3151/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d35e0187e5cccb440e02e8a494ad6659b9edb850b0c2f48f0cf7e1abd9f3151/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:27 np0005590810 podman[92523]: 2026-01-21 16:06:27.613735163 +0000 UTC m=+0.036869320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:27 np0005590810 podman[92523]: 2026-01-21 16:06:27.711503538 +0000 UTC m=+0.134637695 container init ab01337e4ad065d5a868780aadfdc3d09df331fa81ef130a003415fdc2d681df (image=quay.io/ceph/ceph:v19, name=intelligent_franklin, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:06:27 np0005590810 podman[92523]: 2026-01-21 16:06:27.7174941 +0000 UTC m=+0.140628227 container start ab01337e4ad065d5a868780aadfdc3d09df331fa81ef130a003415fdc2d681df (image=quay.io/ceph/ceph:v19, name=intelligent_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 11:06:27 np0005590810 podman[92523]: 2026-01-21 16:06:27.720938676 +0000 UTC m=+0.144072833 container attach ab01337e4ad065d5a868780aadfdc3d09df331fa81ef130a003415fdc2d681df (image=quay.io/ceph/ceph:v19, name=intelligent_franklin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "ddf1dd38-dfbb-4c43-8183-adc037e53029"} v 0)
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ddf1dd38-dfbb-4c43-8183-adc037e53029"}]: dispatch
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 21 11:06:27 np0005590810 podman[92580]: 2026-01-21 16:06:27.820438689 +0000 UTC m=+0.035594847 container create f2ffddb9908b6369acbc25e775394b17fd11aa43ba3ba9c0b164d456f5edd349 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ddf1dd38-dfbb-4c43-8183-adc037e53029"}]': finished
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e44 e44: 3 total, 2 up, 3 in
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 2 up, 3 in
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:27 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:27 np0005590810 systemd[1]: Started libpod-conmon-f2ffddb9908b6369acbc25e775394b17fd11aa43ba3ba9c0b164d456f5edd349.scope.
Jan 21 11:06:27 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:27 np0005590810 podman[92580]: 2026-01-21 16:06:27.895781481 +0000 UTC m=+0.110937669 container init f2ffddb9908b6369acbc25e775394b17fd11aa43ba3ba9c0b164d456f5edd349 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:06:27 np0005590810 podman[92580]: 2026-01-21 16:06:27.804283396 +0000 UTC m=+0.019439574 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:06:27 np0005590810 podman[92580]: 2026-01-21 16:06:27.900740557 +0000 UTC m=+0.115896715 container start f2ffddb9908b6369acbc25e775394b17fd11aa43ba3ba9c0b164d456f5edd349 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_goldstine, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 21 11:06:27 np0005590810 podman[92580]: 2026-01-21 16:06:27.90379592 +0000 UTC m=+0.118952098 container attach f2ffddb9908b6369acbc25e775394b17fd11aa43ba3ba9c0b164d456f5edd349 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:06:27 np0005590810 infallible_goldstine[92615]: 167 167
Jan 21 11:06:27 np0005590810 podman[92580]: 2026-01-21 16:06:27.90498073 +0000 UTC m=+0.120136888 container died f2ffddb9908b6369acbc25e775394b17fd11aa43ba3ba9c0b164d456f5edd349 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 11:06:27 np0005590810 systemd[1]: libpod-f2ffddb9908b6369acbc25e775394b17fd11aa43ba3ba9c0b164d456f5edd349.scope: Deactivated successfully.
Jan 21 11:06:27 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d934fd098e729c0fe381f0c78d0a5e0f6148cf13aa30d4ba3e195699e778e93e-merged.mount: Deactivated successfully.
Jan 21 11:06:27 np0005590810 podman[92580]: 2026-01-21 16:06:27.941840039 +0000 UTC m=+0.156996207 container remove f2ffddb9908b6369acbc25e775394b17fd11aa43ba3ba9c0b164d456f5edd349 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_goldstine, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 21 11:06:27 np0005590810 systemd[1]: libpod-conmon-f2ffddb9908b6369acbc25e775394b17fd11aa43ba3ba9c0b164d456f5edd349.scope: Deactivated successfully.
Jan 21 11:06:28 np0005590810 podman[92639]: 2026-01-21 16:06:28.118648789 +0000 UTC m=+0.039209268 container create dc7aed2d8a209045385c127a91ee59d3bbb8d96f51729694f4159ea222dc805e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:06:28 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 21 11:06:28 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 21 11:06:28 np0005590810 systemd[1]: Started libpod-conmon-dc7aed2d8a209045385c127a91ee59d3bbb8d96f51729694f4159ea222dc805e.scope.
Jan 21 11:06:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 11:06:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216667552' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 21 11:06:28 np0005590810 intelligent_franklin[92550]: 
Jan 21 11:06:28 np0005590810 intelligent_franklin[92550]: {"epoch":3,"fsid":"d9745984-fea8-5195-8ec5-61f685b5c785","modified":"2026-01-21T16:06:11.900214Z","created":"2026-01-21T16:02:46.356140Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 21 11:06:28 np0005590810 intelligent_franklin[92550]: dumped monmap epoch 3
Jan 21 11:06:28 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:28 np0005590810 systemd[1]: libpod-ab01337e4ad065d5a868780aadfdc3d09df331fa81ef130a003415fdc2d681df.scope: Deactivated successfully.
Jan 21 11:06:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31f99a685fdafaf371208c1618f290d2b4d8d5974b7ff4d34bbc18271912860/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:28 np0005590810 podman[92523]: 2026-01-21 16:06:28.182503406 +0000 UTC m=+0.605637563 container died ab01337e4ad065d5a868780aadfdc3d09df331fa81ef130a003415fdc2d681df (image=quay.io/ceph/ceph:v19, name=intelligent_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 11:06:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31f99a685fdafaf371208c1618f290d2b4d8d5974b7ff4d34bbc18271912860/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31f99a685fdafaf371208c1618f290d2b4d8d5974b7ff4d34bbc18271912860/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31f99a685fdafaf371208c1618f290d2b4d8d5974b7ff4d34bbc18271912860/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:28 np0005590810 podman[92639]: 2026-01-21 16:06:28.100675446 +0000 UTC m=+0.021235955 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:06:28 np0005590810 podman[92639]: 2026-01-21 16:06:28.199357702 +0000 UTC m=+0.119918191 container init dc7aed2d8a209045385c127a91ee59d3bbb8d96f51729694f4159ea222dc805e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_heyrovsky, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:06:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v24: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:28 np0005590810 podman[92639]: 2026-01-21 16:06:28.206311995 +0000 UTC m=+0.126872474 container start dc7aed2d8a209045385c127a91ee59d3bbb8d96f51729694f4159ea222dc805e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:06:28 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:28 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.102:0/665553271' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ddf1dd38-dfbb-4c43-8183-adc037e53029"}]: dispatch
Jan 21 11:06:28 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ddf1dd38-dfbb-4c43-8183-adc037e53029"}]: dispatch
Jan 21 11:06:28 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ddf1dd38-dfbb-4c43-8183-adc037e53029"}]': finished
Jan 21 11:06:28 np0005590810 podman[92639]: 2026-01-21 16:06:28.228863803 +0000 UTC m=+0.149424282 container attach dc7aed2d8a209045385c127a91ee59d3bbb8d96f51729694f4159ea222dc805e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_heyrovsky, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:06:28 np0005590810 podman[92523]: 2026-01-21 16:06:28.238617691 +0000 UTC m=+0.661751828 container remove ab01337e4ad065d5a868780aadfdc3d09df331fa81ef130a003415fdc2d681df (image=quay.io/ceph/ceph:v19, name=intelligent_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:06:28 np0005590810 systemd[1]: libpod-conmon-ab01337e4ad065d5a868780aadfdc3d09df331fa81ef130a003415fdc2d681df.scope: Deactivated successfully.
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]: {
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:    "0": [
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:        {
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            "devices": [
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "/dev/loop3"
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            ],
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            "lv_name": "ceph_lv0",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            "lv_size": "21470642176",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            "name": "ceph_lv0",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            "tags": {
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.cluster_name": "ceph",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.crush_device_class": "",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.encrypted": "0",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.osd_id": "0",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.type": "block",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.vdo": "0",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:                "ceph.with_tpm": "0"
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            },
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            "type": "block",
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:            "vg_name": "ceph_vg0"
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:        }
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]:    ]
Jan 21 11:06:28 np0005590810 eloquent_heyrovsky[92655]: }
Jan 21 11:06:28 np0005590810 systemd[1]: libpod-dc7aed2d8a209045385c127a91ee59d3bbb8d96f51729694f4159ea222dc805e.scope: Deactivated successfully.
Jan 21 11:06:28 np0005590810 podman[92639]: 2026-01-21 16:06:28.49895719 +0000 UTC m=+0.419517699 container died dc7aed2d8a209045385c127a91ee59d3bbb8d96f51729694f4159ea222dc805e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_heyrovsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:06:28 np0005590810 podman[92639]: 2026-01-21 16:06:28.539322185 +0000 UTC m=+0.459882664 container remove dc7aed2d8a209045385c127a91ee59d3bbb8d96f51729694f4159ea222dc805e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_heyrovsky, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 21 11:06:28 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c31f99a685fdafaf371208c1618f290d2b4d8d5974b7ff4d34bbc18271912860-merged.mount: Deactivated successfully.
Jan 21 11:06:28 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8d35e0187e5cccb440e02e8a494ad6659b9edb850b0c2f48f0cf7e1abd9f3151-merged.mount: Deactivated successfully.
Jan 21 11:06:28 np0005590810 systemd[1]: libpod-conmon-dc7aed2d8a209045385c127a91ee59d3bbb8d96f51729694f4159ea222dc805e.scope: Deactivated successfully.
Jan 21 11:06:28 np0005590810 python3[92744]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:28 np0005590810 podman[92766]: 2026-01-21 16:06:28.877367535 +0000 UTC m=+0.038407932 container create 66d9c4768527998ef508c4fc1b213b04522cd11958b83362951bfefc738699eb (image=quay.io/ceph/ceph:v19, name=jovial_meitner, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:06:28 np0005590810 systemd[1]: Started libpod-conmon-66d9c4768527998ef508c4fc1b213b04522cd11958b83362951bfefc738699eb.scope.
Jan 21 11:06:28 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcd9725a350b44626f8c5ffcca6a1f9d45a0a37e2d65ced22159374d92d95b57/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcd9725a350b44626f8c5ffcca6a1f9d45a0a37e2d65ced22159374d92d95b57/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:28 np0005590810 podman[92766]: 2026-01-21 16:06:28.948426323 +0000 UTC m=+0.109466750 container init 66d9c4768527998ef508c4fc1b213b04522cd11958b83362951bfefc738699eb (image=quay.io/ceph/ceph:v19, name=jovial_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 21 11:06:28 np0005590810 podman[92766]: 2026-01-21 16:06:28.954260679 +0000 UTC m=+0.115301076 container start 66d9c4768527998ef508c4fc1b213b04522cd11958b83362951bfefc738699eb (image=quay.io/ceph/ceph:v19, name=jovial_meitner, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:06:28 np0005590810 podman[92766]: 2026-01-21 16:06:28.862381022 +0000 UTC m=+0.023421439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:28 np0005590810 podman[92766]: 2026-01-21 16:06:28.957835499 +0000 UTC m=+0.118875896 container attach 66d9c4768527998ef508c4fc1b213b04522cd11958b83362951bfefc738699eb (image=quay.io/ceph/ceph:v19, name=jovial_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:06:29 np0005590810 podman[92835]: 2026-01-21 16:06:29.10427822 +0000 UTC m=+0.037322615 container create ba055d51f95bb379eeed0953e9c56cfbd49bdc194fe933760300ba111979a8e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 21 11:06:29 np0005590810 systemd[1]: Started libpod-conmon-ba055d51f95bb379eeed0953e9c56cfbd49bdc194fe933760300ba111979a8e4.scope.
Jan 21 11:06:29 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:29 np0005590810 podman[92835]: 2026-01-21 16:06:29.159200816 +0000 UTC m=+0.092245221 container init ba055d51f95bb379eeed0953e9c56cfbd49bdc194fe933760300ba111979a8e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:06:29 np0005590810 podman[92835]: 2026-01-21 16:06:29.164343359 +0000 UTC m=+0.097387744 container start ba055d51f95bb379eeed0953e9c56cfbd49bdc194fe933760300ba111979a8e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:06:29 np0005590810 hopeful_burnell[92860]: 167 167
Jan 21 11:06:29 np0005590810 systemd[1]: libpod-ba055d51f95bb379eeed0953e9c56cfbd49bdc194fe933760300ba111979a8e4.scope: Deactivated successfully.
Jan 21 11:06:29 np0005590810 podman[92835]: 2026-01-21 16:06:29.169109838 +0000 UTC m=+0.102154243 container attach ba055d51f95bb379eeed0953e9c56cfbd49bdc194fe933760300ba111979a8e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 21 11:06:29 np0005590810 podman[92835]: 2026-01-21 16:06:29.169420109 +0000 UTC m=+0.102464494 container died ba055d51f95bb379eeed0953e9c56cfbd49bdc194fe933760300ba111979a8e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_burnell, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:06:29 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 21 11:06:29 np0005590810 podman[92835]: 2026-01-21 16:06:29.088173979 +0000 UTC m=+0.021218384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:06:29 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 21 11:06:29 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6a03d4d9b068a0e6a3f17cd0fe7cfdbf1a5fdbeed174dbf4b9a7f3d660923310-merged.mount: Deactivated successfully.
Jan 21 11:06:29 np0005590810 podman[92835]: 2026-01-21 16:06:29.207642123 +0000 UTC m=+0.140686508 container remove ba055d51f95bb379eeed0953e9c56cfbd49bdc194fe933760300ba111979a8e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:06:29 np0005590810 systemd[1]: libpod-conmon-ba055d51f95bb379eeed0953e9c56cfbd49bdc194fe933760300ba111979a8e4.scope: Deactivated successfully.
Jan 21 11:06:29 np0005590810 podman[92885]: 2026-01-21 16:06:29.344339627 +0000 UTC m=+0.035074110 container create a53e3d04367ec5a3a94558c1dcb26d380a7364d8bea502e09185b65042e53bdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:06:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 21 11:06:29 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/79366990' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 21 11:06:29 np0005590810 jovial_meitner[92783]: [client.openstack]
Jan 21 11:06:29 np0005590810 jovial_meitner[92783]: 	key = AQB7+HBpAAAAABAA54R49c/JuD6hYSLKjoU2sg==
Jan 21 11:06:29 np0005590810 jovial_meitner[92783]: 	caps mgr = "allow *"
Jan 21 11:06:29 np0005590810 jovial_meitner[92783]: 	caps mon = "profile rbd"
Jan 21 11:06:29 np0005590810 jovial_meitner[92783]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 21 11:06:29 np0005590810 systemd[1]: Started libpod-conmon-a53e3d04367ec5a3a94558c1dcb26d380a7364d8bea502e09185b65042e53bdf.scope.
Jan 21 11:06:29 np0005590810 podman[92766]: 2026-01-21 16:06:29.400052109 +0000 UTC m=+0.561092506 container died 66d9c4768527998ef508c4fc1b213b04522cd11958b83362951bfefc738699eb (image=quay.io/ceph/ceph:v19, name=jovial_meitner, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:06:29 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:29 np0005590810 systemd[1]: libpod-66d9c4768527998ef508c4fc1b213b04522cd11958b83362951bfefc738699eb.scope: Deactivated successfully.
Jan 21 11:06:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8424b100d7292debf51389d7ee4eb12b07c3d89930c99ad00a9b4fa70844c278/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8424b100d7292debf51389d7ee4eb12b07c3d89930c99ad00a9b4fa70844c278/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8424b100d7292debf51389d7ee4eb12b07c3d89930c99ad00a9b4fa70844c278/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8424b100d7292debf51389d7ee4eb12b07c3d89930c99ad00a9b4fa70844c278/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:29 np0005590810 podman[92885]: 2026-01-21 16:06:29.329383014 +0000 UTC m=+0.020117497 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:06:29 np0005590810 podman[92885]: 2026-01-21 16:06:29.427127329 +0000 UTC m=+0.117861842 container init a53e3d04367ec5a3a94558c1dcb26d380a7364d8bea502e09185b65042e53bdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 21 11:06:29 np0005590810 podman[92885]: 2026-01-21 16:06:29.434794986 +0000 UTC m=+0.125529469 container start a53e3d04367ec5a3a94558c1dcb26d380a7364d8bea502e09185b65042e53bdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Jan 21 11:06:29 np0005590810 podman[92885]: 2026-01-21 16:06:29.442833156 +0000 UTC m=+0.133567639 container attach a53e3d04367ec5a3a94558c1dcb26d380a7364d8bea502e09185b65042e53bdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:06:29 np0005590810 podman[92766]: 2026-01-21 16:06:29.448938221 +0000 UTC m=+0.609978618 container remove 66d9c4768527998ef508c4fc1b213b04522cd11958b83362951bfefc738699eb (image=quay.io/ceph/ceph:v19, name=jovial_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 21 11:06:29 np0005590810 systemd[1]: libpod-conmon-66d9c4768527998ef508c4fc1b213b04522cd11958b83362951bfefc738699eb.scope: Deactivated successfully.
Jan 21 11:06:29 np0005590810 systemd[1]: var-lib-containers-storage-overlay-dcd9725a350b44626f8c5ffcca6a1f9d45a0a37e2d65ced22159374d92d95b57-merged.mount: Deactivated successfully.
Jan 21 11:06:30 np0005590810 lvm[92987]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:06:30 np0005590810 lvm[92987]: VG ceph_vg0 finished
Jan 21 11:06:30 np0005590810 quizzical_panini[92903]: {}
Jan 21 11:06:30 np0005590810 systemd[1]: libpod-a53e3d04367ec5a3a94558c1dcb26d380a7364d8bea502e09185b65042e53bdf.scope: Deactivated successfully.
Jan 21 11:06:30 np0005590810 podman[92885]: 2026-01-21 16:06:30.120331332 +0000 UTC m=+0.811065835 container died a53e3d04367ec5a3a94558c1dcb26d380a7364d8bea502e09185b65042e53bdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_panini, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:06:30 np0005590810 systemd[1]: libpod-a53e3d04367ec5a3a94558c1dcb26d380a7364d8bea502e09185b65042e53bdf.scope: Consumed 1.009s CPU time.
Jan 21 11:06:30 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 21 11:06:30 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 21 11:06:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v25: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:30 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8424b100d7292debf51389d7ee4eb12b07c3d89930c99ad00a9b4fa70844c278-merged.mount: Deactivated successfully.
Jan 21 11:06:30 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/79366990' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 21 11:06:30 np0005590810 podman[92885]: 2026-01-21 16:06:30.244031349 +0000 UTC m=+0.934765832 container remove a53e3d04367ec5a3a94558c1dcb26d380a7364d8bea502e09185b65042e53bdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:06:30 np0005590810 systemd[1]: libpod-conmon-a53e3d04367ec5a3a94558c1dcb26d380a7364d8bea502e09185b65042e53bdf.scope: Deactivated successfully.
Jan 21 11:06:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:06:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:06:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:30 np0005590810 ansible-async_wrapper.py[93154]: Invoked with j596814302297 30 /home/zuul/.ansible/tmp/ansible-tmp-1769011590.5224326-37640-138260438235887/AnsiballZ_command.py _
Jan 21 11:06:30 np0005590810 ansible-async_wrapper.py[93157]: Starting module and watcher
Jan 21 11:06:30 np0005590810 ansible-async_wrapper.py[93157]: Start watching 93158 (30)
Jan 21 11:06:30 np0005590810 ansible-async_wrapper.py[93158]: Start module (93158)
Jan 21 11:06:30 np0005590810 ansible-async_wrapper.py[93154]: Return async_wrapper task started.
Jan 21 11:06:31 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 21 11:06:31 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 21 11:06:31 np0005590810 python3[93159]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:31 np0005590810 podman[93160]: 2026-01-21 16:06:31.21064663 +0000 UTC m=+0.054196392 container create d00e50c65248212955253850302648f1a209930daf5d76cab1d02a0fe29d28f8 (image=quay.io/ceph/ceph:v19, name=sleepy_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 21 11:06:31 np0005590810 systemd[1]: Started libpod-conmon-d00e50c65248212955253850302648f1a209930daf5d76cab1d02a0fe29d28f8.scope.
Jan 21 11:06:31 np0005590810 podman[93160]: 2026-01-21 16:06:31.185146713 +0000 UTC m=+0.028696505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:31 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62399b693c7071e9b39d7d15492ac60219fbce6a7cc9889cf642355da6f73380/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62399b693c7071e9b39d7d15492ac60219fbce6a7cc9889cf642355da6f73380/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:31 np0005590810 podman[93160]: 2026-01-21 16:06:31.306562564 +0000 UTC m=+0.150112356 container init d00e50c65248212955253850302648f1a209930daf5d76cab1d02a0fe29d28f8 (image=quay.io/ceph/ceph:v19, name=sleepy_hypatia, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 11:06:31 np0005590810 podman[93160]: 2026-01-21 16:06:31.31390196 +0000 UTC m=+0.157451722 container start d00e50c65248212955253850302648f1a209930daf5d76cab1d02a0fe29d28f8 (image=quay.io/ceph/ceph:v19, name=sleepy_hypatia, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:06:31 np0005590810 podman[93160]: 2026-01-21 16:06:31.317144989 +0000 UTC m=+0.160694771 container attach d00e50c65248212955253850302648f1a209930daf5d76cab1d02a0fe29d28f8 (image=quay.io/ceph/ceph:v19, name=sleepy_hypatia, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:06:31 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:31 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:31 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14358 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 11:06:31 np0005590810 sleepy_hypatia[93175]: 
Jan 21 11:06:31 np0005590810 sleepy_hypatia[93175]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 11:06:31 np0005590810 systemd[1]: libpod-d00e50c65248212955253850302648f1a209930daf5d76cab1d02a0fe29d28f8.scope: Deactivated successfully.
Jan 21 11:06:31 np0005590810 podman[93160]: 2026-01-21 16:06:31.712098261 +0000 UTC m=+0.555648013 container died d00e50c65248212955253850302648f1a209930daf5d76cab1d02a0fe29d28f8 (image=quay.io/ceph/ceph:v19, name=sleepy_hypatia, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 11:06:31 np0005590810 systemd[1]: var-lib-containers-storage-overlay-62399b693c7071e9b39d7d15492ac60219fbce6a7cc9889cf642355da6f73380-merged.mount: Deactivated successfully.
Jan 21 11:06:31 np0005590810 podman[93160]: 2026-01-21 16:06:31.752503758 +0000 UTC m=+0.596053520 container remove d00e50c65248212955253850302648f1a209930daf5d76cab1d02a0fe29d28f8 (image=quay.io/ceph/ceph:v19, name=sleepy_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:06:31 np0005590810 systemd[1]: libpod-conmon-d00e50c65248212955253850302648f1a209930daf5d76cab1d02a0fe29d28f8.scope: Deactivated successfully.
Jan 21 11:06:31 np0005590810 ansible-async_wrapper.py[93158]: Module complete (93158)
Jan 21 11:06:32 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 21 11:06:32 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 21 11:06:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v26: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:32 np0005590810 python3[93260]: ansible-ansible.legacy.async_status Invoked with jid=j596814302297.93154 mode=status _async_dir=/root/.ansible_async
Jan 21 11:06:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:06:32 np0005590810 python3[93309]: ansible-ansible.legacy.async_status Invoked with jid=j596814302297.93154 mode=cleanup _async_dir=/root/.ansible_async
Jan 21 11:06:33 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 21 11:06:33 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 21 11:06:33 np0005590810 python3[93335]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:33 np0005590810 podman[93336]: 2026-01-21 16:06:33.363594276 +0000 UTC m=+0.049794374 container create d3c0994d46ec487cb181675b25416753086f5e6e1b18e55f54755f235b32031f (image=quay.io/ceph/ceph:v19, name=funny_kapitsa, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:06:33 np0005590810 systemd[1]: Started libpod-conmon-d3c0994d46ec487cb181675b25416753086f5e6e1b18e55f54755f235b32031f.scope.
Jan 21 11:06:33 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cce3e71c37a51e477c77fef064d478de16d365f7b5b7f8a980980a7a6363188/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cce3e71c37a51e477c77fef064d478de16d365f7b5b7f8a980980a7a6363188/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:33 np0005590810 podman[93336]: 2026-01-21 16:06:33.341549616 +0000 UTC m=+0.027749744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:33 np0005590810 podman[93336]: 2026-01-21 16:06:33.438472432 +0000 UTC m=+0.124672620 container init d3c0994d46ec487cb181675b25416753086f5e6e1b18e55f54755f235b32031f (image=quay.io/ceph/ceph:v19, name=funny_kapitsa, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:06:33 np0005590810 podman[93336]: 2026-01-21 16:06:33.44494747 +0000 UTC m=+0.131147578 container start d3c0994d46ec487cb181675b25416753086f5e6e1b18e55f54755f235b32031f (image=quay.io/ceph/ceph:v19, name=funny_kapitsa, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:06:33 np0005590810 podman[93336]: 2026-01-21 16:06:33.448129197 +0000 UTC m=+0.134329355 container attach d3c0994d46ec487cb181675b25416753086f5e6e1b18e55f54755f235b32031f (image=quay.io/ceph/ceph:v19, name=funny_kapitsa, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:06:33 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14364 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 11:06:33 np0005590810 funny_kapitsa[93351]: 
Jan 21 11:06:33 np0005590810 funny_kapitsa[93351]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 11:06:33 np0005590810 systemd[1]: libpod-d3c0994d46ec487cb181675b25416753086f5e6e1b18e55f54755f235b32031f.scope: Deactivated successfully.
Jan 21 11:06:33 np0005590810 podman[93336]: 2026-01-21 16:06:33.858996623 +0000 UTC m=+0.545196731 container died d3c0994d46ec487cb181675b25416753086f5e6e1b18e55f54755f235b32031f (image=quay.io/ceph/ceph:v19, name=funny_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 21 11:06:33 np0005590810 systemd[1]: var-lib-containers-storage-overlay-4cce3e71c37a51e477c77fef064d478de16d365f7b5b7f8a980980a7a6363188-merged.mount: Deactivated successfully.
Jan 21 11:06:34 np0005590810 podman[93336]: 2026-01-21 16:06:34.041969852 +0000 UTC m=+0.728169960 container remove d3c0994d46ec487cb181675b25416753086f5e6e1b18e55f54755f235b32031f (image=quay.io/ceph/ceph:v19, name=funny_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:06:34 np0005590810 systemd[1]: libpod-conmon-d3c0994d46ec487cb181675b25416753086f5e6e1b18e55f54755f235b32031f.scope: Deactivated successfully.
Jan 21 11:06:34 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.1e deep-scrub starts
Jan 21 11:06:34 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.1e deep-scrub ok
Jan 21 11:06:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v27: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:34 np0005590810 python3[93414]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:35 np0005590810 podman[93415]: 2026-01-21 16:06:35.021638922 +0000 UTC m=+0.025762637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:35 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 21 11:06:35 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 21 11:06:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 21 11:06:35 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 21 11:06:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:35 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:35 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 21 11:06:35 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 21 11:06:35 np0005590810 podman[93415]: 2026-01-21 16:06:35.400339146 +0000 UTC m=+0.404462801 container create 664c232d543d1ba69e57b55a519ece22ab06ecf6dab386b2a1af1ced18757fdb (image=quay.io/ceph/ceph:v19, name=gracious_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:06:35 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 21 11:06:35 np0005590810 systemd[1]: Started libpod-conmon-664c232d543d1ba69e57b55a519ece22ab06ecf6dab386b2a1af1ced18757fdb.scope.
Jan 21 11:06:35 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ded34af1e98e5c796141185eb3178f7a21747c66357cfc0ee2fe96d6c757e2b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ded34af1e98e5c796141185eb3178f7a21747c66357cfc0ee2fe96d6c757e2b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:35 np0005590810 podman[93415]: 2026-01-21 16:06:35.881121773 +0000 UTC m=+0.885245448 container init 664c232d543d1ba69e57b55a519ece22ab06ecf6dab386b2a1af1ced18757fdb (image=quay.io/ceph/ceph:v19, name=gracious_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 11:06:35 np0005590810 podman[93415]: 2026-01-21 16:06:35.886806624 +0000 UTC m=+0.890930279 container start 664c232d543d1ba69e57b55a519ece22ab06ecf6dab386b2a1af1ced18757fdb (image=quay.io/ceph/ceph:v19, name=gracious_swartz, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 21 11:06:35 np0005590810 podman[93415]: 2026-01-21 16:06:35.890765897 +0000 UTC m=+0.894889592 container attach 664c232d543d1ba69e57b55a519ece22ab06ecf6dab386b2a1af1ced18757fdb (image=quay.io/ceph/ceph:v19, name=gracious_swartz, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:06:35 np0005590810 ansible-async_wrapper.py[93157]: Done in kid B.
Jan 21 11:06:36 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Jan 21 11:06:36 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Jan 21 11:06:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v28: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:36 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14370 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 11:06:36 np0005590810 gracious_swartz[93430]: 
Jan 21 11:06:36 np0005590810 gracious_swartz[93430]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", 
"service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 21 11:06:36 np0005590810 systemd[1]: libpod-664c232d543d1ba69e57b55a519ece22ab06ecf6dab386b2a1af1ced18757fdb.scope: Deactivated successfully.
Jan 21 11:06:36 np0005590810 podman[93415]: 2026-01-21 16:06:36.279700816 +0000 UTC m=+1.283824461 container died 664c232d543d1ba69e57b55a519ece22ab06ecf6dab386b2a1af1ced18757fdb (image=quay.io/ceph/ceph:v19, name=gracious_swartz, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:06:36 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ded34af1e98e5c796141185eb3178f7a21747c66357cfc0ee2fe96d6c757e2b5-merged.mount: Deactivated successfully.
Jan 21 11:06:36 np0005590810 podman[93415]: 2026-01-21 16:06:36.314765725 +0000 UTC m=+1.318889370 container remove 664c232d543d1ba69e57b55a519ece22ab06ecf6dab386b2a1af1ced18757fdb (image=quay.io/ceph/ceph:v19, name=gracious_swartz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Jan 21 11:06:36 np0005590810 systemd[1]: libpod-conmon-664c232d543d1ba69e57b55a519ece22ab06ecf6dab386b2a1af1ced18757fdb.scope: Deactivated successfully.
Jan 21 11:06:36 np0005590810 ceph-mon[74380]: Deploying daemon osd.2 on compute-2
Jan 21 11:06:37 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 21 11:06:37 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 21 11:06:37 np0005590810 python3[93493]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:37 np0005590810 podman[93494]: 2026-01-21 16:06:37.375021393 +0000 UTC m=+0.045279773 container create e8ddd0be5b50d81def73df8b4aab58203f91b9d83f7ff3f45db120c8a20739e4 (image=quay.io/ceph/ceph:v19, name=angry_hodgkin, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 21 11:06:37 np0005590810 systemd[1]: Started libpod-conmon-e8ddd0be5b50d81def73df8b4aab58203f91b9d83f7ff3f45db120c8a20739e4.scope.
Jan 21 11:06:37 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3725c7a2418b4a79d5972d0b272e90561e7d75a85f9a0acc88f6f6ff5873f7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3725c7a2418b4a79d5972d0b272e90561e7d75a85f9a0acc88f6f6ff5873f7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:37 np0005590810 podman[93494]: 2026-01-21 16:06:37.444194687 +0000 UTC m=+0.114453087 container init e8ddd0be5b50d81def73df8b4aab58203f91b9d83f7ff3f45db120c8a20739e4 (image=quay.io/ceph/ceph:v19, name=angry_hodgkin, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:06:37 np0005590810 podman[93494]: 2026-01-21 16:06:37.35588318 +0000 UTC m=+0.026141580 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:37 np0005590810 podman[93494]: 2026-01-21 16:06:37.449067291 +0000 UTC m=+0.119325671 container start e8ddd0be5b50d81def73df8b4aab58203f91b9d83f7ff3f45db120c8a20739e4 (image=quay.io/ceph/ceph:v19, name=angry_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:06:37 np0005590810 podman[93494]: 2026-01-21 16:06:37.452241008 +0000 UTC m=+0.122499388 container attach e8ddd0be5b50d81def73df8b4aab58203f91b9d83f7ff3f45db120c8a20739e4 (image=quay.io/ceph/ceph:v19, name=angry_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 11:06:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:06:37 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.14376 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 11:06:37 np0005590810 angry_hodgkin[93509]: 
Jan 21 11:06:37 np0005590810 angry_hodgkin[93509]: [{"container_id": "251b3c96f85a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.12%", "created": "2026-01-21T16:03:35.855910Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T16:05:57.947338Z", "memory_usage": 7790919, "ports": [], "service_name": "crash", "started": "2026-01-21T16:03:35.736541Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d9745984-fea8-5195-8ec5-61f685b5c785@crash.compute-0", "version": "19.2.3"}, {"container_id": "2fd3d3e889fb", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.48%", "created": "2026-01-21T16:04:26.282194Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-21T16:05:57.971854Z", "memory_usage": 7817134, "ports": [], "service_name": "crash", "started": "2026-01-21T16:04:26.188813Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d9745984-fea8-5195-8ec5-61f685b5c785@crash.compute-1", "version": "19.2.3"}, {"container_id": "b5e7673311cc", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], 
"container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.31%", "created": "2026-01-21T16:05:43.325091Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-21T16:05:57.868440Z", "memory_usage": 7812939, "ports": [], "service_name": "crash", "started": "2026-01-21T16:05:42.947814Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d9745984-fea8-5195-8ec5-61f685b5c785@crash.compute-2", "version": "19.2.3"}, {"container_id": "299628b491cd", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "27.43%", "created": "2026-01-21T16:02:53.298840Z", "daemon_id": "compute-0.ygffhs", "daemon_name": "mgr.compute-0.ygffhs", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T16:05:57.947264Z", "memory_usage": 536975769, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-21T16:02:53.211056Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d9745984-fea8-5195-8ec5-61f685b5c785@mgr.compute-0.ygffhs", "version": "19.2.3"}, {"daemon_id": "compute-1.oewgcf", "daemon_name": "mgr.compute-1.oewgcf", "daemon_type": "mgr", "events": ["2026-01-21T16:06:08.659025Z daemon:mgr.compute-1.oewgcf [INFO] \"Deployed mgr.compute-1.oewgcf on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [8443, 8765], "service_name": "mgr", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2.kdxyxe", 
"daemon_name": "mgr.compute-2.kdxyxe", "daemon_type": "mgr", "events": ["2026-01-21T16:06:06.525921Z daemon:mgr.compute-2.kdxyxe [INFO] \"Deployed mgr.compute-2.kdxyxe on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [8443, 8765], "service_name": "mgr", "status": 2, "status_desc": "starting"}, {"container_id": "2bb730cd0dc0", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "1.76%", "created": "2026-01-21T16:02:48.239681Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T16:05:57.947149Z", "memory_request": 2147483648, "memory_usage": 45696942, "ports": [], "service_name": "mon", "started": "2026-01-21T16:02:51.242276Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d9745984-fea8-5195-8ec5-61f685b5c785@mon.compute-0", "version": "19.2.3"}, {"daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2026-01-21T16:06:11.493919Z daemon:mon.compute-1 [INFO] \"Deployed mon.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "memory_request": 2147483648, "ports": [], "service_name": "mon", "status": 2, "status_desc": "starting"}, {"container_id": "07511a9cf209", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.98%", "created": 
"2026-01-21T16:05:30.806762Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-21T16:05:57.868247Z", "memory_request": 2147483648, "memory_usage": 29894901, "ports": [], "service_name": "mon", "started": "2026-01-21T16:05:30.560909Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d9745984-fea8-5195-8ec5-61f685b5c785@mon.compute-2", "version": "19.2.3"}, {"daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "events": ["2026-01-21T16:06:17.182611Z daemon:node-exporter.compute-0 [INFO] \"Deployed node-exporter.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "events": ["2026-01-21T16:06:21.027864Z daemon:node-exporter.compute-1 [INFO] \"Deployed node-exporter.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2026-01-21T16:06:26.004691Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "9c20b5361e26", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": 
"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.78%", "created": "2026-01-21T16:04:38.810691Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd"
Jan 21 11:06:37 np0005590810 systemd[1]: libpod-e8ddd0be5b50d81def73df8b4aab58203f91b9d83f7ff3f45db120c8a20739e4.scope: Deactivated successfully.
Jan 21 11:06:37 np0005590810 podman[93494]: 2026-01-21 16:06:37.819992415 +0000 UTC m=+0.490250815 container died e8ddd0be5b50d81def73df8b4aab58203f91b9d83f7ff3f45db120c8a20739e4 (image=quay.io/ceph/ceph:v19, name=angry_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 11:06:37 np0005590810 systemd[1]: var-lib-containers-storage-overlay-bf3725c7a2418b4a79d5972d0b272e90561e7d75a85f9a0acc88f6f6ff5873f7-merged.mount: Deactivated successfully.
Jan 21 11:06:37 np0005590810 podman[93494]: 2026-01-21 16:06:37.85643096 +0000 UTC m=+0.526689340 container remove e8ddd0be5b50d81def73df8b4aab58203f91b9d83f7ff3f45db120c8a20739e4 (image=quay.io/ceph/ceph:v19, name=angry_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:06:37 np0005590810 systemd[1]: libpod-conmon-e8ddd0be5b50d81def73df8b4aab58203f91b9d83f7ff3f45db120c8a20739e4.scope: Deactivated successfully.
Jan 21 11:06:38 np0005590810 rsyslogd[1006]: message too long (9340) with configured size 8096, begin of message is: [{"container_id": "251b3c96f85a", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 21 11:06:38 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 21 11:06:38 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 21 11:06:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v29: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:38 np0005590810 python3[93571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:38 np0005590810 podman[93572]: 2026-01-21 16:06:38.856537546 +0000 UTC m=+0.066004709 container create 808d6fe7def022d7eed3e7979f59f48e843d408284ba5e32f5cfe0f94c5252eb (image=quay.io/ceph/ceph:v19, name=jovial_mayer, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 21 11:06:38 np0005590810 systemd[1]: Started libpod-conmon-808d6fe7def022d7eed3e7979f59f48e843d408284ba5e32f5cfe0f94c5252eb.scope.
Jan 21 11:06:38 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27855b23de18e85253f442c17da9b8f4cff88b2c0ebdf5a9a5e39c2191ae5a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27855b23de18e85253f442c17da9b8f4cff88b2c0ebdf5a9a5e39c2191ae5a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:38 np0005590810 podman[93572]: 2026-01-21 16:06:38.834510526 +0000 UTC m=+0.043977709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:38 np0005590810 podman[93572]: 2026-01-21 16:06:38.937824378 +0000 UTC m=+0.147291571 container init 808d6fe7def022d7eed3e7979f59f48e843d408284ba5e32f5cfe0f94c5252eb (image=quay.io/ceph/ceph:v19, name=jovial_mayer, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:06:38 np0005590810 podman[93572]: 2026-01-21 16:06:38.943736557 +0000 UTC m=+0.153203710 container start 808d6fe7def022d7eed3e7979f59f48e843d408284ba5e32f5cfe0f94c5252eb (image=quay.io/ceph/ceph:v19, name=jovial_mayer, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:06:38 np0005590810 podman[93572]: 2026-01-21 16:06:38.947534304 +0000 UTC m=+0.157001457 container attach 808d6fe7def022d7eed3e7979f59f48e843d408284ba5e32f5cfe0f94c5252eb (image=quay.io/ceph/ceph:v19, name=jovial_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 11:06:39 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Jan 21 11:06:39 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Jan 21 11:06:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 11:06:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4149554792' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 11:06:39 np0005590810 jovial_mayer[93587]: 
Jan 21 11:06:39 np0005590810 jovial_mayer[93587]: {"fsid":"d9745984-fea8-5195-8ec5-61f685b5c785","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":22,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":44,"num_osds":3,"num_up_osds":2,"osd_up_since":1769011487,"num_in_osds":3,"osd_in_since":1769011587,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194}],"num_pgs":194,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":56373248,"bytes_avail":42884911104,"bytes_total":42941284352},"fsmap":{"epoch":2,"btime":"2026-01-21T16:05:57:396348+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":2,"modified":"2026-01-21T16:04:17.975788+0000","services":{}},"progress_events":{}}
Jan 21 11:06:39 np0005590810 systemd[1]: libpod-808d6fe7def022d7eed3e7979f59f48e843d408284ba5e32f5cfe0f94c5252eb.scope: Deactivated successfully.
Jan 21 11:06:39 np0005590810 podman[93572]: 2026-01-21 16:06:39.375439844 +0000 UTC m=+0.584906987 container died 808d6fe7def022d7eed3e7979f59f48e843d408284ba5e32f5cfe0f94c5252eb (image=quay.io/ceph/ceph:v19, name=jovial_mayer, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:06:39 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a27855b23de18e85253f442c17da9b8f4cff88b2c0ebdf5a9a5e39c2191ae5a2-merged.mount: Deactivated successfully.
Jan 21 11:06:39 np0005590810 podman[93572]: 2026-01-21 16:06:39.419065759 +0000 UTC m=+0.628532902 container remove 808d6fe7def022d7eed3e7979f59f48e843d408284ba5e32f5cfe0f94c5252eb (image=quay.io/ceph/ceph:v19, name=jovial_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:06:39 np0005590810 systemd[1]: libpod-conmon-808d6fe7def022d7eed3e7979f59f48e843d408284ba5e32f5cfe0f94c5252eb.scope: Deactivated successfully.
Jan 21 11:06:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:06:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:06:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v30: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:40 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 21 11:06:40 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 21 11:06:40 np0005590810 python3[93650]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:40 np0005590810 podman[93651]: 2026-01-21 16:06:40.374879728 +0000 UTC m=+0.036074604 container create 2583c159ac36e119fefa15e89ca61934fe190286944aa1637ab16f6c669c828b (image=quay.io/ceph/ceph:v19, name=romantic_hopper, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 21 11:06:40 np0005590810 systemd[1]: Started libpod-conmon-2583c159ac36e119fefa15e89ca61934fe190286944aa1637ab16f6c669c828b.scope.
Jan 21 11:06:40 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:40 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da513878c1bd895dde3c1fdbaa84bf268da964181208ef724ecf8bfa51328177/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:40 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da513878c1bd895dde3c1fdbaa84bf268da964181208ef724ecf8bfa51328177/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:40 np0005590810 podman[93651]: 2026-01-21 16:06:40.439556421 +0000 UTC m=+0.100751307 container init 2583c159ac36e119fefa15e89ca61934fe190286944aa1637ab16f6c669c828b (image=quay.io/ceph/ceph:v19, name=romantic_hopper, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:06:40 np0005590810 podman[93651]: 2026-01-21 16:06:40.44485686 +0000 UTC m=+0.106051726 container start 2583c159ac36e119fefa15e89ca61934fe190286944aa1637ab16f6c669c828b (image=quay.io/ceph/ceph:v19, name=romantic_hopper, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:06:40 np0005590810 podman[93651]: 2026-01-21 16:06:40.447833409 +0000 UTC m=+0.109028305 container attach 2583c159ac36e119fefa15e89ca61934fe190286944aa1637ab16f6c669c828b (image=quay.io/ceph/ceph:v19, name=romantic_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 21 11:06:40 np0005590810 podman[93651]: 2026-01-21 16:06:40.36035537 +0000 UTC m=+0.021550266 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 11:06:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3287834691' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 11:06:40 np0005590810 romantic_hopper[93666]: 
Jan 21 11:06:40 np0005590810 romantic_hopper[93666]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard//server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.ygffhs/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502923980","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""}]
Jan 21 11:06:40 np0005590810 systemd[1]: libpod-2583c159ac36e119fefa15e89ca61934fe190286944aa1637ab16f6c669c828b.scope: Deactivated successfully.
Jan 21 11:06:40 np0005590810 podman[93651]: 2026-01-21 16:06:40.801631038 +0000 UTC m=+0.462825914 container died 2583c159ac36e119fefa15e89ca61934fe190286944aa1637ab16f6c669c828b (image=quay.io/ceph/ceph:v19, name=romantic_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 11:06:40 np0005590810 systemd[1]: var-lib-containers-storage-overlay-da513878c1bd895dde3c1fdbaa84bf268da964181208ef724ecf8bfa51328177-merged.mount: Deactivated successfully.
Jan 21 11:06:40 np0005590810 podman[93651]: 2026-01-21 16:06:40.841755256 +0000 UTC m=+0.502950142 container remove 2583c159ac36e119fefa15e89ca61934fe190286944aa1637ab16f6c669c828b (image=quay.io/ceph/ceph:v19, name=romantic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:06:40 np0005590810 systemd[1]: libpod-conmon-2583c159ac36e119fefa15e89ca61934fe190286944aa1637ab16f6c669c828b.scope: Deactivated successfully.
Jan 21 11:06:41 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Jan 21 11:06:41 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Jan 21 11:06:41 np0005590810 python3[93726]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:41 np0005590810 podman[93727]: 2026-01-21 16:06:41.923102353 +0000 UTC m=+0.062715698 container create be729746e4002c75526994e9ae8c256b8cb639272617968558e6b274dc8ec257 (image=quay.io/ceph/ceph:v19, name=pedantic_wu, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 21 11:06:41 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:41 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:41 np0005590810 systemd[1]: Started libpod-conmon-be729746e4002c75526994e9ae8c256b8cb639272617968558e6b274dc8ec257.scope.
Jan 21 11:06:41 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6fa9f641b25450820de071eedfee6165f5f58efec0d7b9da2d9f38183ed1e5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6fa9f641b25450820de071eedfee6165f5f58efec0d7b9da2d9f38183ed1e5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:41 np0005590810 podman[93727]: 2026-01-21 16:06:41.888612314 +0000 UTC m=+0.028225659 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:41 np0005590810 podman[93727]: 2026-01-21 16:06:41.993456957 +0000 UTC m=+0.133070332 container init be729746e4002c75526994e9ae8c256b8cb639272617968558e6b274dc8ec257 (image=quay.io/ceph/ceph:v19, name=pedantic_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 21 11:06:42 np0005590810 podman[93727]: 2026-01-21 16:06:42.000466253 +0000 UTC m=+0.140079608 container start be729746e4002c75526994e9ae8c256b8cb639272617968558e6b274dc8ec257 (image=quay.io/ceph/ceph:v19, name=pedantic_wu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 11:06:42 np0005590810 podman[93727]: 2026-01-21 16:06:42.004134746 +0000 UTC m=+0.143748091 container attach be729746e4002c75526994e9ae8c256b8cb639272617968558e6b274dc8ec257 (image=quay.io/ceph/ceph:v19, name=pedantic_wu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 11:06:42 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts
Jan 21 11:06:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v31: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:42 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok
Jan 21 11:06:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 21 11:06:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1930087554' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 21 11:06:42 np0005590810 pedantic_wu[93742]: mimic
Jan 21 11:06:42 np0005590810 systemd[1]: libpod-be729746e4002c75526994e9ae8c256b8cb639272617968558e6b274dc8ec257.scope: Deactivated successfully.
Jan 21 11:06:42 np0005590810 podman[93727]: 2026-01-21 16:06:42.368562551 +0000 UTC m=+0.508175896 container died be729746e4002c75526994e9ae8c256b8cb639272617968558e6b274dc8ec257 (image=quay.io/ceph/ceph:v19, name=pedantic_wu, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:06:42 np0005590810 systemd[1]: var-lib-containers-storage-overlay-2a6fa9f641b25450820de071eedfee6165f5f58efec0d7b9da2d9f38183ed1e5-merged.mount: Deactivated successfully.
Jan 21 11:06:42 np0005590810 podman[93727]: 2026-01-21 16:06:42.405004066 +0000 UTC m=+0.544617411 container remove be729746e4002c75526994e9ae8c256b8cb639272617968558e6b274dc8ec257 (image=quay.io/ceph/ceph:v19, name=pedantic_wu, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 11:06:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 21 11:06:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 21 11:06:42 np0005590810 systemd[1]: libpod-conmon-be729746e4002c75526994e9ae8c256b8cb639272617968558e6b274dc8ec257.scope: Deactivated successfully.
Jan 21 11:06:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:06:42 np0005590810 ceph-mon[74380]: from='osd.2 [v2:192.168.122.102:6800/1632882551,v1:192.168.122.102:6801/1632882551]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 21 11:06:42 np0005590810 ceph-mon[74380]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 21 11:06:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e45 e45: 3 total, 2 up, 3 in
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 2 up, 3 in
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:43 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e45 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Jan 21 11:06:43 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 21 11:06:43 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 21 11:06:43 np0005590810 python3[93803]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:06:43 np0005590810 podman[93804]: 2026-01-21 16:06:43.367187519 +0000 UTC m=+0.040045197 container create cd38ab2a92a03ac3bb1450803a80d6b297a6792372de06fb41bc8ee3f106e1dd (image=quay.io/ceph/ceph:v19, name=clever_saha, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:06:43 np0005590810 systemd[1]: Started libpod-conmon-cd38ab2a92a03ac3bb1450803a80d6b297a6792372de06fb41bc8ee3f106e1dd.scope.
Jan 21 11:06:43 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:43 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3231838e92efbb947cf711558d960ed91ffef9592ed76487b492d873fbadf0b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:43 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3231838e92efbb947cf711558d960ed91ffef9592ed76487b492d873fbadf0b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:43 np0005590810 podman[93804]: 2026-01-21 16:06:43.439199459 +0000 UTC m=+0.112057147 container init cd38ab2a92a03ac3bb1450803a80d6b297a6792372de06fb41bc8ee3f106e1dd (image=quay.io/ceph/ceph:v19, name=clever_saha, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:06:43 np0005590810 podman[93804]: 2026-01-21 16:06:43.444483166 +0000 UTC m=+0.117340824 container start cd38ab2a92a03ac3bb1450803a80d6b297a6792372de06fb41bc8ee3f106e1dd (image=quay.io/ceph/ceph:v19, name=clever_saha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:06:43 np0005590810 podman[93804]: 2026-01-21 16:06:43.349890398 +0000 UTC m=+0.022748076 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:06:43 np0005590810 podman[93804]: 2026-01-21 16:06:43.447891301 +0000 UTC m=+0.120748979 container attach cd38ab2a92a03ac3bb1450803a80d6b297a6792372de06fb41bc8ee3f106e1dd (image=quay.io/ceph/ceph:v19, name=clever_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 21 11:06:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146747940' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 21 11:06:43 np0005590810 clever_saha[93819]: 
Jan 21 11:06:43 np0005590810 systemd[1]: libpod-cd38ab2a92a03ac3bb1450803a80d6b297a6792372de06fb41bc8ee3f106e1dd.scope: Deactivated successfully.
Jan 21 11:06:43 np0005590810 clever_saha[93819]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":2},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":8}}
Jan 21 11:06:43 np0005590810 podman[93804]: 2026-01-21 16:06:43.886839131 +0000 UTC m=+0.559696789 container died cd38ab2a92a03ac3bb1450803a80d6b297a6792372de06fb41bc8ee3f106e1dd (image=quay.io/ceph/ceph:v19, name=clever_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:06:43 np0005590810 systemd[1]: var-lib-containers-storage-overlay-b3231838e92efbb947cf711558d960ed91ffef9592ed76487b492d873fbadf0b-merged.mount: Deactivated successfully.
Jan 21 11:06:43 np0005590810 podman[93804]: 2026-01-21 16:06:43.940514085 +0000 UTC m=+0.613371783 container remove cd38ab2a92a03ac3bb1450803a80d6b297a6792372de06fb41bc8ee3f106e1dd (image=quay.io/ceph/ceph:v19, name=clever_saha, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:06:43 np0005590810 systemd[1]: libpod-conmon-cd38ab2a92a03ac3bb1450803a80d6b297a6792372de06fb41bc8ee3f106e1dd.scope: Deactivated successfully.
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 21 11:06:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v33: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:44 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 21 11:06:44 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:44 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev eddd11a7-c9c2-4ab8-9740-9c13e3a2930d (Updating rgw.rgw deployment (+3 -> 3))
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ggubtc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ggubtc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e46 e46: 3 total, 2 up, 3 in
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: from='osd.2 [v2:192.168.122.102:6800/1632882551,v1:192.168.122.102:6801/1632882551]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 2 up, 3 in
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:44 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ggubtc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 11:06:44 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1632882551; not ready for session (expect reconnect)
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:44 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:44 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.ggubtc on compute-2
Jan 21 11:06:44 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.ggubtc on compute-2
Jan 21 11:06:45 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 21 11:06:45 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 21 11:06:45 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:45 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ggubtc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 11:06:45 np0005590810 ceph-mon[74380]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Jan 21 11:06:45 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ggubtc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 11:06:45 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:45 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1632882551; not ready for session (expect reconnect)
Jan 21 11:06:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:45 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 21 11:06:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v35: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.184516907s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.017837524s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.184516907s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.017837524s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[6.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=14.326052666s) [] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 139.159561157s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188754082s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.022445679s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[6.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=14.326052666s) [] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.159561157s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188754082s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022445679s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188573837s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.022583008s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.1b( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.574720383s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 active pruub 139.408737183s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188573837s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022583008s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[3.8( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=46 pruub=13.188488007s) [] r=-1 lpr=46 pi=[30,46)/1 crt=0'0 mlcod 0'0 active pruub 138.022583008s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.1b( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.574720383s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.408737183s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[3.8( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=46 pruub=13.188488007s) [] r=-1 lpr=46 pi=[30,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022583008s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188615799s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.022827148s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=46 pruub=13.188897133s) [] r=-1 lpr=46 pi=[30,46)/1 crt=0'0 mlcod 0'0 active pruub 138.022430420s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188615799s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022827148s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=46 pruub=13.188897133s) [] r=-1 lpr=46 pi=[30,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022430420s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188524246s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.022949219s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188524246s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022949219s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188459396s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.022964478s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188459396s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022964478s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=14.329373360s) [] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 139.163650513s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=14.329373360s) [] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.163650513s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[5.0( empty local-lis/les=31/33 n=0 ec=17/17 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188346863s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.023056030s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[5.0( empty local-lis/les=31/33 n=0 ec=17/17 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188346863s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.023056030s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.a( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.577419281s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 active pruub 139.412261963s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.a( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.577419281s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412261963s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.d( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.577242851s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 active pruub 139.412292480s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[3.0( empty local-lis/les=30/33 n=0 ec=15/15 lis/c=30/30 les/c/f=33/33/0 sis=46 pruub=13.188159943s) [] r=-1 lpr=46 pi=[30,46)/1 crt=0'0 mlcod 0'0 active pruub 138.023193359s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[5.d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188433647s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.023498535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.d( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.577242851s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412292480s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[5.d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188433647s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.023498535s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[3.0( empty local-lis/les=30/33 n=0 ec=15/15 lis/c=30/30 les/c/f=33/33/0 sis=46 pruub=13.188159943s) [] r=-1 lpr=46 pi=[30,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.023193359s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.c( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.577056885s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 active pruub 139.412261963s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[5.b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188836098s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.024093628s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[5.b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188836098s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.024093628s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[7.a( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.577007294s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 active pruub 139.412307739s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[7.a( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.577007294s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412307739s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[5.8( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188672066s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.024078369s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[5.8( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188672066s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.024078369s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[7.14( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.576834679s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 active pruub 139.412322998s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[7.14( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.576834679s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412322998s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.10( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.576821327s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 active pruub 139.412353516s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.10( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.576821327s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412353516s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.c( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.577056885s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412261963s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.576985359s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 active pruub 139.412658691s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.576985359s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412658691s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[5.12( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188476562s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.024276733s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.15( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.576877594s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 active pruub 139.412658691s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[5.13( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.190914154s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.026748657s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[5.12( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188476562s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.024276733s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[2.15( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.576877594s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412658691s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[5.13( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.190914154s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.026748657s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188282967s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active pruub 138.024291992s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=46 pruub=13.188282967s) [] r=-1 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.024291992s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[7.1d( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.576580048s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 active pruub 139.412658691s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:46 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 46 pg[7.1d( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=36/36 les/c/f=37/37/0 sis=46 pruub=14.576580048s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412658691s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:46 np0005590810 ceph-mon[74380]: Deploying daemon rgw.rgw.compute-2.ggubtc on compute-2
Jan 21 11:06:46 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1632882551; not ready for session (expect reconnect)
Jan 21 11:06:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:46 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:47 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 21 11:06:47 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.gvknpl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.gvknpl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.gvknpl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:47 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.gvknpl on compute-1
Jan 21 11:06:47 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.gvknpl on compute-1
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.gvknpl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.gvknpl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:47 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1632882551; not ready for session (expect reconnect)
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:47 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v36: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:48 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Jan 21 11:06:48 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Jan 21 11:06:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 21 11:06:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e47 e47: 3 total, 2 up, 3 in
Jan 21 11:06:48 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 2 up, 3 in
Jan 21 11:06:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:48 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:48 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 47 pg[9.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [0] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 21 11:06:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 21 11:06:48 np0005590810 ceph-mon[74380]: Deploying daemon rgw.rgw.compute-1.gvknpl on compute-1
Jan 21 11:06:48 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.102:0/2373681571' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 21 11:06:48 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 21 11:06:48 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1632882551; not ready for session (expect reconnect)
Jan 21 11:06:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:48 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:49 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Jan 21 11:06:49 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.erxmtp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.erxmtp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e48 e48: 3 total, 2 up, 3 in
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 2 up, 3 in
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:49 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:49 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 48 pg[9.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [0] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.erxmtp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:49 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.erxmtp on compute-0
Jan 21 11:06:49 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.erxmtp on compute-0
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.erxmtp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.erxmtp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:49 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1632882551; not ready for session (expect reconnect)
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:49 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:50 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 21 11:06:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v39: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 21 11:06:50 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 21 11:06:50 np0005590810 podman[93954]: 2026-01-21 16:06:50.231186701 +0000 UTC m=+0.042137427 container create cc4d548bba863107fd8e35958e87054f7b06ab2378b039d349257c55f01a09a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 21 11:06:50 np0005590810 systemd[1]: Started libpod-conmon-cc4d548bba863107fd8e35958e87054f7b06ab2378b039d349257c55f01a09a7.scope.
Jan 21 11:06:50 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:50 np0005590810 podman[93954]: 2026-01-21 16:06:50.304313799 +0000 UTC m=+0.115264535 container init cc4d548bba863107fd8e35958e87054f7b06ab2378b039d349257c55f01a09a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:06:50 np0005590810 podman[93954]: 2026-01-21 16:06:50.214203441 +0000 UTC m=+0.025154187 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:06:50 np0005590810 podman[93954]: 2026-01-21 16:06:50.310314341 +0000 UTC m=+0.121265067 container start cc4d548bba863107fd8e35958e87054f7b06ab2378b039d349257c55f01a09a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermat, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 21 11:06:50 np0005590810 priceless_fermat[93971]: 167 167
Jan 21 11:06:50 np0005590810 podman[93954]: 2026-01-21 16:06:50.314371556 +0000 UTC m=+0.125322282 container attach cc4d548bba863107fd8e35958e87054f7b06ab2378b039d349257c55f01a09a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermat, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 11:06:50 np0005590810 systemd[1]: libpod-cc4d548bba863107fd8e35958e87054f7b06ab2378b039d349257c55f01a09a7.scope: Deactivated successfully.
Jan 21 11:06:50 np0005590810 podman[93954]: 2026-01-21 16:06:50.314850993 +0000 UTC m=+0.125801719 container died cc4d548bba863107fd8e35958e87054f7b06ab2378b039d349257c55f01a09a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermat, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:06:50 np0005590810 systemd[1]: var-lib-containers-storage-overlay-bb5b7b7e895fba0ca97759be2c3da8b6b4747b7fa15e2f056bc6cb8177d9115c-merged.mount: Deactivated successfully.
Jan 21 11:06:50 np0005590810 podman[93954]: 2026-01-21 16:06:50.347634724 +0000 UTC m=+0.158585450 container remove cc4d548bba863107fd8e35958e87054f7b06ab2378b039d349257c55f01a09a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermat, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:06:50 np0005590810 systemd[1]: libpod-conmon-cc4d548bba863107fd8e35958e87054f7b06ab2378b039d349257c55f01a09a7.scope: Deactivated successfully.
Jan 21 11:06:50 np0005590810 systemd[1]: Reloading.
Jan 21 11:06:50 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:06:50 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:06:50 np0005590810 systemd[1]: Reloading.
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 21 11:06:50 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:06:50 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e49 e49: 3 total, 2 up, 3 in
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 2 up, 3 in
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:50 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 21 11:06:50 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1632882551; not ready for session (expect reconnect)
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:50 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: Deploying daemon rgw.rgw.compute-0.erxmtp on compute-0
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.101:0/440805425' entity='client.rgw.rgw.compute-1.gvknpl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.102:0/1926089021' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 21 11:06:50 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 21 11:06:50 np0005590810 systemd[1]: Starting Ceph rgw.rgw.compute-0.erxmtp for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:06:51 np0005590810 podman[94109]: 2026-01-21 16:06:51.091007134 +0000 UTC m=+0.033873179 container create eeffc541fbbf79a46ad6f683c4608be31935b9f83eceff65b6fc97f17f4f6e70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-rgw-rgw-compute-0-erxmtp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:06:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44902a533828742a021edba29e93138037cc9985350ea35ff72d7d31a30a541c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44902a533828742a021edba29e93138037cc9985350ea35ff72d7d31a30a541c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44902a533828742a021edba29e93138037cc9985350ea35ff72d7d31a30a541c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44902a533828742a021edba29e93138037cc9985350ea35ff72d7d31a30a541c/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.erxmtp supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:51 np0005590810 podman[94109]: 2026-01-21 16:06:51.15040112 +0000 UTC m=+0.093267185 container init eeffc541fbbf79a46ad6f683c4608be31935b9f83eceff65b6fc97f17f4f6e70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-rgw-rgw-compute-0-erxmtp, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:06:51 np0005590810 podman[94109]: 2026-01-21 16:06:51.155138999 +0000 UTC m=+0.098005044 container start eeffc541fbbf79a46ad6f683c4608be31935b9f83eceff65b6fc97f17f4f6e70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-rgw-rgw-compute-0-erxmtp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:06:51 np0005590810 bash[94109]: eeffc541fbbf79a46ad6f683c4608be31935b9f83eceff65b6fc97f17f4f6e70
Jan 21 11:06:51 np0005590810 podman[94109]: 2026-01-21 16:06:51.074335274 +0000 UTC m=+0.017201349 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:06:51 np0005590810 systemd[1]: Started Ceph rgw.rgw.compute-0.erxmtp for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:06:51 np0005590810 radosgw[94128]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 21 11:06:51 np0005590810 radosgw[94128]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Jan 21 11:06:51 np0005590810 radosgw[94128]: framework: beast
Jan 21 11:06:51 np0005590810 radosgw[94128]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 21 11:06:51 np0005590810 radosgw[94128]: init_numa not setting numa affinity
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:06:51 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.11 deep-scrub starts
Jan 21 11:06:51 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.11 deep-scrub ok
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:51 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev eddd11a7-c9c2-4ab8-9740-9c13e3a2930d (Updating rgw.rgw deployment (+3 -> 3))
Jan 21 11:06:51 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event eddd11a7-c9c2-4ab8-9740-9c13e3a2930d (Updating rgw.rgw deployment (+3 -> 3)) in 7 seconds
Jan 21 11:06:51 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 21 11:06:51 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:51 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev fea3b4b1-32e4-4784-8d67-ac6db3574849 (Updating mds.cephfs deployment (+3 -> 3))
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dfgygz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dfgygz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dfgygz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:51 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.dfgygz on compute-2
Jan 21 11:06:51 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.dfgygz on compute-2
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1632882551,v1:192.168.122.102:6801/1632882551] boot
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dfgygz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dfgygz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: Deploying daemon mds.cephfs.compute-2.dfgygz on compute-2
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 21 11:06:51 np0005590810 ceph-mon[74380]: osd.2 [v2:192.168.122.102:6800/1632882551,v1:192.168.122.102:6801/1632882551] boot
Jan 21 11:06:52 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 12 completed events
Jan 21 11:06:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:06:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:52 np0005590810 ceph-mgr[74671]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 21 11:06:52 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Jan 21 11:06:52 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Jan 21 11:06:52 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v42: 196 pgs: 29 peering, 2 unknown, 165 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:06:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:06:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 21 11:06:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 21 11:06:52 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 21 11:06:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 21 11:06:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3831644430' entity='client.rgw.rgw.compute-0.erxmtp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 11:06:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 21 11:06:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 11:06:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 21 11:06:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 11:06:53 np0005590810 ceph-mon[74380]: OSD bench result of 2816.216695 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 11:06:53 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:53 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3831644430' entity='client.rgw.rgw.compute-0.erxmtp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 11:06:53 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.101:0/440805425' entity='client.rgw.rgw.compute-1.gvknpl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 11:06:53 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 11:06:53 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.102:0/1926089021' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 11:06:53 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[11.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [0] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[6.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=50 pruub=7.611740112s) [2] r=-1 lpr=50 pi=[33,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.159561157s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[6.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=50 pruub=7.611704826s) [2] r=-1 lpr=50 pi=[33,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.159561157s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[3.1b( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=50 pruub=6.474400043s) [2] r=-1 lpr=50 pi=[30,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022430420s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[4.19( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.469699383s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.017837524s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[2.1b( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.858795643s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.408737183s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[2.1b( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.858779430s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.408737183s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[4.1d( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.472528934s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022583008s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[4.1d( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.472515583s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022583008s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[3.8( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=50 pruub=6.472212791s) [2] r=-1 lpr=50 pi=[30,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022583008s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[4.19( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.467489719s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.017837524s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[3.8( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=50 pruub=6.472188473s) [2] r=-1 lpr=50 pi=[30,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022583008s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[3.1b( empty local-lis/les=30/33 n=0 ec=30/15 lis/c=30/30 les/c/f=33/33/0 sis=50 pruub=6.472056866s) [2] r=-1 lpr=50 pi=[30,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022430420s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[4.3( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.472324371s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022827148s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[4.3( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.472304821s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022827148s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[6.1( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=50 pruub=7.612957478s) [2] r=-1 lpr=50 pi=[33,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.163650513s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[6.1( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=50 pruub=7.612917423s) [2] r=-1 lpr=50 pi=[33,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.163650513s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[4.1c( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471717834s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022445679s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[4.1c( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471682549s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022445679s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[4.6( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.472100735s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022949219s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[4.6( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.472072124s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022949219s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[4.2( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.472044945s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022964478s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[4.2( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.472025394s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.022964478s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[5.0( empty local-lis/les=31/33 n=0 ec=17/17 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471971035s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.023056030s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[5.0( empty local-lis/les=31/33 n=0 ec=17/17 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471924782s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.023056030s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[3.0( empty local-lis/les=30/33 n=0 ec=15/15 lis/c=30/30 les/c/f=33/33/0 sis=50 pruub=6.471700191s) [2] r=-1 lpr=50 pi=[30,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.023193359s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[3.0( empty local-lis/les=30/33 n=0 ec=15/15 lis/c=30/30 les/c/f=33/33/0 sis=50 pruub=6.471674442s) [2] r=-1 lpr=50 pi=[30,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.023193359s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[2.a( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860649586s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412261963s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[2.a( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860616207s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412261963s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[5.d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471789360s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.023498535s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[2.d( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860500813s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412292480s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[2.c( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860463142s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412261963s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[5.d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471718311s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.023498535s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[2.c( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860430717s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412261963s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[2.d( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860410213s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412292480s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[5.b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.472025394s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.024093628s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[5.b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471990585s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.024093628s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[5.8( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471943378s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.024078369s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[5.8( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471919537s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.024078369s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[7.14( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860094547s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412322998s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[2.10( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860079288s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412353516s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[7.14( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860056400s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412322998s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[2.13( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860301495s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412658691s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[2.13( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860281467s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412658691s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[4.14( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471827507s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.024291992s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[4.14( empty local-lis/les=31/33 n=0 ec=31/16 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471800327s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.024291992s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[2.15( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860107899s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412658691s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[5.12( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471706390s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.024276733s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[7.a( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.859691620s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412307739s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[2.10( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860044479s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412353516s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[5.12( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.471623898s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.024276733s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[2.15( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.860014915s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412658691s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[5.13( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.473976612s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.026748657s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[5.13( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=50 pruub=6.473950863s) [2] r=-1 lpr=50 pi=[31,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.026748657s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[7.a( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.859395027s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412307739s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 50 pg[7.1d( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.859670639s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412658691s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 51 pg[7.1d( empty local-lis/les=36/37 n=0 ec=33/19 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=7.859456539s) [2] r=-1 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 139.412658691s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:06:53 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Jan 21 11:06:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 21 11:06:54 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v44: 197 pgs: 1 unknown, 29 peering, 167 active+clean; 450 KiB data, 481 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 1.6 KiB/s wr, 8 op/s
Jan 21 11:06:54 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Jan 21 11:06:54 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3831644430' entity='client.rgw.rgw.compute-0.erxmtp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 21 11:06:54 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [0] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:06:54 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hjphzb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hjphzb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hjphzb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:54 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.hjphzb on compute-0
Jan 21 11:06:54 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.hjphzb on compute-0
Jan 21 11:06:55 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Jan 21 11:06:55 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Jan 21 11:06:55 np0005590810 podman[94818]: 2026-01-21 16:06:55.34197059 +0000 UTC m=+0.042918743 container create dee252b282a89cfc245e467470bd59521a4381427f77eccded0c482d5306743a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 21 11:06:55 np0005590810 systemd[1]: Started libpod-conmon-dee252b282a89cfc245e467470bd59521a4381427f77eccded0c482d5306743a.scope.
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3831644430' entity='client.rgw.rgw.compute-0.erxmtp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 11:06:55 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:06:55 np0005590810 podman[94818]: 2026-01-21 16:06:55.323836521 +0000 UTC m=+0.024784694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:06:55 np0005590810 podman[94818]: 2026-01-21 16:06:55.425585729 +0000 UTC m=+0.126533902 container init dee252b282a89cfc245e467470bd59521a4381427f77eccded0c482d5306743a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_raman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 11:06:55 np0005590810 podman[94818]: 2026-01-21 16:06:55.432680668 +0000 UTC m=+0.133628821 container start dee252b282a89cfc245e467470bd59521a4381427f77eccded0c482d5306743a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_raman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:06:55 np0005590810 podman[94818]: 2026-01-21 16:06:55.435528524 +0000 UTC m=+0.136476677 container attach dee252b282a89cfc245e467470bd59521a4381427f77eccded0c482d5306743a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_raman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:06:55 np0005590810 hardcore_raman[94834]: 167 167
Jan 21 11:06:55 np0005590810 systemd[1]: libpod-dee252b282a89cfc245e467470bd59521a4381427f77eccded0c482d5306743a.scope: Deactivated successfully.
Jan 21 11:06:55 np0005590810 podman[94818]: 2026-01-21 16:06:55.438052239 +0000 UTC m=+0.139000392 container died dee252b282a89cfc245e467470bd59521a4381427f77eccded0c482d5306743a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_raman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:06:55 np0005590810 systemd[1]: var-lib-containers-storage-overlay-700a6e4c6a7c7996304c809699070a2164dc04900472f0e594b183d84551f708-merged.mount: Deactivated successfully.
Jan 21 11:06:55 np0005590810 podman[94818]: 2026-01-21 16:06:55.473716687 +0000 UTC m=+0.174664840 container remove dee252b282a89cfc245e467470bd59521a4381427f77eccded0c482d5306743a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_raman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 21 11:06:55 np0005590810 systemd[1]: libpod-conmon-dee252b282a89cfc245e467470bd59521a4381427f77eccded0c482d5306743a.scope: Deactivated successfully.
Jan 21 11:06:55 np0005590810 systemd[1]: Reloading.
Jan 21 11:06:55 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:06:55 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3831644430' entity='client.rgw.rgw.compute-0.erxmtp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hjphzb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.hjphzb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3831644430' entity='client.rgw.rgw.compute-0.erxmtp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.102:0/1926089021' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.101:0/440805425' entity='client.rgw.rgw.compute-1.gvknpl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e3 new map
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-01-21T16:06:55:593395+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T16:05:57.396255+0000#012modified#0112026-01-21T16:05:57.396255+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.dfgygz{-1:24157} state up:standby seq 1 addr [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] up:boot
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] as mds.0
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.dfgygz assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e3 all = 0
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e4 new map
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-01-21T16:06:55:618284+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T16:05:57.396255+0000#012modified#0112026-01-21T16:06:55.618277+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.dfgygz{0:24157} state up:creating seq 1 addr [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dfgygz=up:creating}
Jan 21 11:06:55 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.dfgygz is now active in filesystem cephfs as rank 0
Jan 21 11:06:55 np0005590810 systemd[1]: Reloading.
Jan 21 11:06:55 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:06:55 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:06:56 np0005590810 systemd[1]: Starting Ceph mds.cephfs.compute-0.hjphzb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:06:56
Jan 21 11:06:56 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v47: 198 pgs: 2 unknown, 29 peering, 167 active+clean; 450 KiB data, 481 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 1.6 KiB/s wr, 8 op/s
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [balancer INFO root] Some PGs (0.010101) are unknown; try again later
Jan 21 11:06:56 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 32)
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:06:56 np0005590810 podman[94978]: 2026-01-21 16:06:56.25855841 +0000 UTC m=+0.023504631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 21 11:06:56 np0005590810 podman[94978]: 2026-01-21 16:06:56.473084398 +0000 UTC m=+0.238030599 container create 799e0671bf677ab7a747c65514fc30553538c713ba1b80555ff003c2d51749dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mds-cephfs-compute-0-hjphzb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3831644430' entity='client.rgw.rgw.compute-0.erxmtp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3831644430' entity='client.rgw.rgw.compute-0.erxmtp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev af713844-7326-47b9-afd2-aa1d5c24833f (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 11:06:56 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2288cd6679df2a8d10a51f63682a137d3f096bb889166da9abeb4e827bbfb7f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:56 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2288cd6679df2a8d10a51f63682a137d3f096bb889166da9abeb4e827bbfb7f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:56 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2288cd6679df2a8d10a51f63682a137d3f096bb889166da9abeb4e827bbfb7f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:56 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2288cd6679df2a8d10a51f63682a137d3f096bb889166da9abeb4e827bbfb7f2/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.hjphzb supports timestamps until 2038 (0x7fffffff)
Jan 21 11:06:56 np0005590810 podman[94978]: 2026-01-21 16:06:56.570114609 +0000 UTC m=+0.335060820 container init 799e0671bf677ab7a747c65514fc30553538c713ba1b80555ff003c2d51749dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mds-cephfs-compute-0-hjphzb, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 21 11:06:56 np0005590810 podman[94978]: 2026-01-21 16:06:56.575413157 +0000 UTC m=+0.340359358 container start 799e0671bf677ab7a747c65514fc30553538c713ba1b80555ff003c2d51749dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mds-cephfs-compute-0-hjphzb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:06:56 np0005590810 bash[94978]: 799e0671bf677ab7a747c65514fc30553538c713ba1b80555ff003c2d51749dd
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 11:06:56 np0005590810 systemd[1]: Started Ceph mds.cephfs.compute-0.hjphzb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:06:56 np0005590810 ceph-mds[94997]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 11:06:56 np0005590810 ceph-mds[94997]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Jan 21 11:06:56 np0005590810 ceph-mds[94997]: main not setting numa affinity
Jan 21 11:06:56 np0005590810 ceph-mds[94997]: pidfile_write: ignore empty --pid-file
Jan 21 11:06:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mds-cephfs-compute-0-hjphzb[94993]: starting mds.cephfs.compute-0.hjphzb at 
Jan 21 11:06:56 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb Updating MDS map to version 4 from mon.0
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: Deploying daemon mds.cephfs.compute-0.hjphzb on compute-0
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: daemon mds.cephfs.compute-2.dfgygz assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: daemon mds.cephfs.compute-2.dfgygz is now active in filesystem cephfs as rank 0
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3831644430' entity='client.rgw.rgw.compute-0.erxmtp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3831644430' entity='client.rgw.rgw.compute-0.erxmtp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.101:0/440805425' entity='client.rgw.rgw.compute-1.gvknpl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.102:0/1926089021' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e5 new map
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e5 print_map
e5
btime 2026-01-21T16:06:56:628769+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-21T16:05:57.396255+0000
modified	2026-01-21T16:06:56.628767+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	0
up	{0=24157}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 24157 members: 24157
[mds.cephfs.compute-2.dfgygz{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] compat {c=[1],r=[1],i=[1fff]}]


Standby daemons:

[mds.cephfs.compute-0.hjphzb{-1:14436} state up:standby seq 1 addr [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 11:06:56 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb Updating MDS map to version 5 from mon.0
Jan 21 11:06:56 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb Monitors have assigned me to become a standby
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] up:active
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] up:boot
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dfgygz=up:active} 1 up:standby
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.hjphzb"} v 0)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.hjphzb"}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e5 all = 0
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e6 new map
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e6 print_map
e6
btime 2026-01-21T16:06:56:671007+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-21T16:05:57.396255+0000
modified	2026-01-21T16:06:56.628767+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	0
up	{0=24157}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	1
qdb_cluster	leader: 24157 members: 24157
[mds.cephfs.compute-2.dfgygz{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] compat {c=[1],r=[1],i=[1fff]}]


Standby daemons:

[mds.cephfs.compute-0.hjphzb{-1:14436} state up:standby seq 1 addr [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dfgygz=up:active} 1 up:standby
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.akvqho", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.akvqho", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.akvqho", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.akvqho on compute-1
Jan 21 11:06:56 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.akvqho on compute-1
Jan 21 11:06:57 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event f3baea65-5aa6-4869-a246-34acad2b5c9a (Global Recovery Event) in 5 seconds
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3831644430' entity='client.rgw.rgw.compute-0.erxmtp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 21 11:06:57 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev e1e3f722-e693-4a1a-8f57-baa86c0861ef (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.akvqho", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.akvqho", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: from='client.? 192.168.122.100:0/3831644430' entity='client.rgw.rgw.compute-0.erxmtp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-1.gvknpl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: from='client.? ' entity='client.rgw.rgw.compute-2.ggubtc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 11:06:57 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:06:58 np0005590810 radosgw[94128]: v1 topic migration: starting v1 topic migration..
Jan 21 11:06:58 np0005590810 radosgw[94128]: LDAP not started since no server URIs were provided in the configuration.
Jan 21 11:06:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-rgw-rgw-compute-0-erxmtp[94124]: 2026-01-21T16:06:58.173+0000 7f1a4852f980 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 21 11:06:58 np0005590810 radosgw[94128]: v1 topic migration: finished v1 topic migration
Jan 21 11:06:58 np0005590810 radosgw[94128]: framework: beast
Jan 21 11:06:58 np0005590810 radosgw[94128]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 21 11:06:58 np0005590810 radosgw[94128]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 21 11:06:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v50: 198 pgs: 198 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 4.0 KiB/s wr, 14 op/s
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:06:58 np0005590810 radosgw[94128]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 21 11:06:58 np0005590810 radosgw[94128]: starting handler: beast
Jan 21 11:06:58 np0005590810 radosgw[94128]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 21 11:06:58 np0005590810 radosgw[94128]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 11:06:58 np0005590810 radosgw[94128]: mgrc service_daemon_register rgw.14430 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.erxmtp,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=7afcc13c-bd8c-49e4-9c06-4ce25f96de08,zone_name=default,zonegroup_id=45605252-ed10-4d36-9d15-c6b8ed8729aa,zonegroup_name=default}
Jan 21 11:06:58 np0005590810 radosgw[94128]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 21 11:06:58 np0005590810 radosgw[94128]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 21 11:06:58 np0005590810 radosgw[94128]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 21 11:06:58 np0005590810 radosgw[94128]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 21 11:06:58 np0005590810 radosgw[94128]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 21 11:06:58 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 680ba5d6-a888-4fda-94a1-17771920c0d1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 21 11:06:58 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: Deploying daemon mds.cephfs.compute-1.akvqho on compute-1
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 21 11:06:59 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev bd5119ed-93e0-4182-92f5-702090497b49 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 56 pg[9.0( v 48'9 (0'0,48'9] local-lis/les=47/48 n=6 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=14.216143608s) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 48'8 mlcod 48'8 active pruub 152.065002441s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 56 pg[8.0( v 42'1 (0'0,42'1] local-lis/les=41/42 n=1 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=14.544856071s) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 152.393783569s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.0( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=14.216143608s) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 48'8 mlcod 0'0 unknown pruub 152.065002441s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.0( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=14.544856071s) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown pruub 152.393783569s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.b( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.e( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.c( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.18( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.a( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.9( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.17( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.1a( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.11( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.1f( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.1b( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.19( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.1c( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.14( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.7( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.4( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.d( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.16( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.8( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.1( v 48'9 (0'0,48'9] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.13( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.2( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.15( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.3( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.10( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.5( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.6( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.12( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.1e( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.1d( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[9.f( v 48'9 lc 0'0 (0'0,48'9] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.5( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.11( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.f( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.12( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.4( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.18( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.1b( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.16( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.1c( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.1a( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.14( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.7( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.17( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.1( v 42'1 (0'0,42'1] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.c( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.3( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.e( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.b( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.1f( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.19( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.d( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.1d( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.9( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.6( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.1e( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.2( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.a( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.8( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.13( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.10( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 57 pg[8.15( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:59 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev fea3b4b1-32e4-4784-8d67-ac6db3574849 (Updating mds.cephfs deployment (+3 -> 3))
Jan 21 11:06:59 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event fea3b4b1-32e4-4784-8d67-ac6db3574849 (Updating mds.cephfs deployment (+3 -> 3)) in 8 seconds
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:59 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev a021559a-80da-4cbd-a84a-9618c61157c1 (Updating nfs.cephfs deployment (+3 -> 3))
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:06:59 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.cqdsgn
Jan 21 11:06:59 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.cqdsgn
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cqdsgn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cqdsgn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cqdsgn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 21 11:06:59 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 21 11:06:59 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:06:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: Cluster is now healthy
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: Creating key for client.nfs.cephfs.0.0.compute-1.cqdsgn
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cqdsgn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cqdsgn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 21 11:07:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v53: 260 pgs: 62 unknown, 198 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 4.0 KiB/s wr, 14 op/s
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 21 11:07:00 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 8e4b894e-cb18-4fe9-81e2-00b56d5ce594 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev af713844-7326-47b9-afd2-aa1d5c24833f (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event af713844-7326-47b9-afd2-aa1d5c24833f (PG autoscaler increasing pool 8 PGs from 1 to 32) in 5 seconds
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev e1e3f722-e693-4a1a-8f57-baa86c0861ef (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event e1e3f722-e693-4a1a-8f57-baa86c0861ef (PG autoscaler increasing pool 9 PGs from 1 to 32) in 4 seconds
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 680ba5d6-a888-4fda-94a1-17771920c0d1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 680ba5d6-a888-4fda-94a1-17771920c0d1 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 3 seconds
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev bd5119ed-93e0-4182-92f5-702090497b49 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event bd5119ed-93e0-4182-92f5-702090497b49 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 2 seconds
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 8e4b894e-cb18-4fe9-81e2-00b56d5ce594 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 8e4b894e-cb18-4fe9-81e2-00b56d5ce594 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e7 new map
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[11.0( v 52'48 (0'0,52'48] local-lis/les=51/52 n=8 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=9.281374931s) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 52'47 mlcod 52'47 active pruub 148.709625244s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e7 print_map#012e7#012btime 2026-01-21T16:07:00:788207+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T16:05:57.396255+0000#012modified#0112026-01-21T16:06:59.666414+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24157 members: 24157#012[mds.cephfs.compute-2.dfgygz{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.hjphzb{-1:14436} state up:standby seq 1 addr [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.akvqho{-1:34133} state up:standby seq 1 addr [v2:192.168.122.101:6804/420177392,v1:192.168.122.101:6805/420177392] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[11.0( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=9.281374931s) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 52'47 mlcod 0'0 unknown pruub 148.709625244s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/420177392,v1:192.168.122.101:6805/420177392] up:boot
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] up:active
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dfgygz=up:active} 2 up:standby
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.15( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.akvqho"} v 0)
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.akvqho"}]: dispatch
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e7 all = 0
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.15( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.17( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.16( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.14( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.16( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.11( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.17( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.10( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.14( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.11( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.3( v 48'9 (0'0,48'9] local-lis/les=56/58 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.2( v 48'9 (0'0,48'9] local-lis/les=56/58 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.3( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.10( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.2( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.e( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.f( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.9( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.8( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.9( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.b( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.f( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.e( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.a( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.8( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.d( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.d( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.c( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.b( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.a( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.c( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.1( v 42'1 (0'0,42'1] local-lis/les=56/58 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.0( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 48'8 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.1( v 48'9 (0'0,48'9] local-lis/les=56/58 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.7( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.0( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.6( v 48'9 (0'0,48'9] local-lis/les=56/58 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.7( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.5( v 48'9 (0'0,48'9] local-lis/les=56/58 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.5( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.4( v 48'9 (0'0,48'9] local-lis/les=56/58 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.6( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.1a( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.1b( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.1b( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.1a( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.18( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.19( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.18( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.19( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.1e( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.1f( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.4( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.1f( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.1e( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.1d( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.1c( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.13( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.12( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.1d( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.1c( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[8.12( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=42'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 58 pg[9.13( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=48'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.cqdsgn-rgw
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.cqdsgn-rgw
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cqdsgn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cqdsgn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cqdsgn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.cqdsgn's ganesha conf is defaulting to empty
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.cqdsgn's ganesha conf is defaulting to empty
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.cqdsgn on compute-1
Jan 21 11:07:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.cqdsgn on compute-1
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 21 11:07:01 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 21 11:07:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.17( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.15( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.14( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.13( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.16( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.12( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1( v 52'48 (0'0,52'48] local-lis/les=51/52 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.c( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.b( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.a( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.9( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.d( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.e( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.f( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.2( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.3( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.4( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.8( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.5( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.6( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.7( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.18( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.19( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1a( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1b( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1d( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1e( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1c( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1f( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.10( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.11( v 52'48 lc 0'0 (0'0,52'48] local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e8 new map
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e8 print_map#012e8#012btime 2026-01-21T16:07:02:124648+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T16:05:57.396255+0000#012modified#0112026-01-21T16:06:59.666414+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24157 members: 24157#012[mds.cephfs.compute-2.dfgygz{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.hjphzb{-1:14436} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.akvqho{-1:34133} state up:standby seq 1 addr [v2:192.168.122.101:6804/420177392,v1:192.168.122.101:6805/420177392] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.15( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.17( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb Updating MDS map to version 8 from mon.0
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] up:standby
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dfgygz=up:active} 2 up:standby
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.14( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.13( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.12( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.0( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 52'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.c( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.16( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.b( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.a( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.d( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.9( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.e( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.f( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.2( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.3( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.4( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.5( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.8( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.6( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.7( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.19( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1a( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.18( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1b( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1d( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1e( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.10( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1f( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.11( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 59 pg[11.1c( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: Rados config object exists: conf-nfs.cephfs
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: Creating key for client.nfs.cephfs.0.0.compute-1.cqdsgn-rgw
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cqdsgn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cqdsgn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: Bind address in nfs.cephfs.0.0.compute-1.cqdsgn's ganesha conf is defaulting to empty
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: Deploying daemon nfs.cephfs.0.0.compute-1.cqdsgn on compute-1
Jan 21 11:07:02 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 19 completed events
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:02 np0005590810 ceph-mgr[74671]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Jan 21 11:07:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v56: 322 pgs: 124 unknown, 198 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 21 11:07:02 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 21 11:07:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:03 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 21 11:07:03 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:03 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.cbyxlf
Jan 21 11:07:03 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.cbyxlf
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cbyxlf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 21 11:07:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cbyxlf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cbyxlf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 21 11:07:04 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 21 11:07:04 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:07:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 7.2 KiB/s wr, 565 op/s
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 21 11:07:04 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Jan 21 11:07:04 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cbyxlf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cbyxlf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 new map
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 print_map#012e9#012btime 2026-01-21T16:07:04:856201+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T16:05:57.396255+0000#012modified#0112026-01-21T16:06:59.666414+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24157 members: 24157#012[mds.cephfs.compute-2.dfgygz{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.hjphzb{-1:14436} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.akvqho{-1:34133} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/420177392,v1:192.168.122.101:6805/420177392] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/420177392,v1:192.168.122.101:6805/420177392] up:standby
Jan 21 11:07:04 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dfgygz=up:active} 2 up:standby
Jan 21 11:07:05 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 21 11:07:05 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 21 11:07:05 np0005590810 ceph-mon[74380]: Creating key for client.nfs.cephfs.1.0.compute-2.cbyxlf
Jan 21 11:07:05 np0005590810 ceph-mon[74380]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 21 11:07:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 6.4 KiB/s wr, 503 op/s
Jan 21 11:07:06 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.16 deep-scrub starts
Jan 21 11:07:06 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.16 deep-scrub ok
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 21 11:07:07 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 79c76342-d42a-4c1b-810e-aa1a42b9e5ba (Global Recovery Event) in 5 seconds
Jan 21 11:07:07 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 21 11:07:07 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 21 11:07:07 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.cbyxlf-rgw
Jan 21 11:07:07 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.cbyxlf-rgw
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cbyxlf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cbyxlf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cbyxlf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 11:07:07 np0005590810 ceph-mgr[74671]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.cbyxlf's ganesha conf is defaulting to empty
Jan 21 11:07:07 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.cbyxlf's ganesha conf is defaulting to empty
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:07:07 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.cbyxlf on compute-2
Jan 21 11:07:07 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.cbyxlf on compute-2
Jan 21 11:07:07 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 21 11:07:07 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cbyxlf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cbyxlf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 11:07:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:07:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v61: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 203 KiB/s rd, 5.9 KiB/s wr, 370 op/s
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.17( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.592246056s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.473464966s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.17( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.592207909s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.473464966s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.16( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.597900391s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479339600s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.15( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.551198006s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.432647705s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.15( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.551157951s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.432647705s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.16( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.597858429s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479339600s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.15( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557686806s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439270020s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.15( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557641029s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439270020s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.17( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557552338s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.439285278s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.14( v 61'51 (0'0,61'51] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.597313881s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=59'49 lcod 61'50 mlcod 61'50 active pruub 156.479080200s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.17( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557537079s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439285278s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.14( v 61'51 (0'0,61'51] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.597279549s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=59'49 lcod 61'50 mlcod 0'0 unknown NOTIFY pruub 156.479080200s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.16( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557591438s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.439392090s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.17( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557529449s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439376831s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.17( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557517052s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439376831s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.13( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.597133636s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479110718s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.11( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557443619s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.439422607s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.13( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.597120285s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479110718s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.16( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557271957s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439270020s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.11( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557403564s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439422607s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.16( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557250023s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439270020s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.10( v 59'2 (0'0,59'2] local-lis/les=56/58 n=1 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557342529s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=59'2 lcod 42'1 mlcod 42'1 active pruub 155.439437866s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.16( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557566643s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439392090s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.10( v 59'2 (0'0,59'2] local-lis/les=56/58 n=1 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557325363s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=59'2 lcod 42'1 mlcod 0'0 unknown NOTIFY pruub 155.439437866s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.12( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.596983910s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479125977s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.12( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.596972466s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479125977s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.10( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557232857s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.439498901s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.11( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557191849s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439468384s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.10( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557220459s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439498901s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.11( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557179451s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439468384s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.1( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.596843719s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479202271s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.1( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.596831322s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479202271s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.2( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557081223s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439498901s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.2( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557069778s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439498901s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.3( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556972504s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439483643s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.3( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556959152s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439483643s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.e( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556932449s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.439529419s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.e( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556920052s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439529419s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.f( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556908607s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439529419s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.9( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556841850s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.439544678s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.8( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557230949s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439941406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.8( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.557219505s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439941406s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.3( v 48'9 (0'0,48'9] local-lis/les=56/58 n=1 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556735039s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.439468384s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.9( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556821823s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439544678s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.3( v 48'9 (0'0,48'9] local-lis/les=56/58 n=1 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556719780s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439468384s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.f( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556890488s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439529419s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.a( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.596465111s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479370117s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.8( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556644440s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.439575195s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.a( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.596449852s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479370117s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.8( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556627274s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439575195s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.9( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556550026s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439575195s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.b( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556477547s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.439605713s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.9( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556455612s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439575195s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.b( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556461334s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439605713s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.a( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556765556s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439926147s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.f( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556317329s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.439620972s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.f( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556304932s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439620972s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.e( v 61'51 (0'0,61'51] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.595964432s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=59'49 lcod 61'50 mlcod 61'50 active pruub 156.479400635s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.e( v 61'51 (0'0,61'51] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.595933914s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=59'49 lcod 61'50 mlcod 0'0 unknown NOTIFY pruub 156.479400635s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.d( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556459427s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439971924s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.d( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556447029s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439971924s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.f( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.595855713s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479415894s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.d( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556361198s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.439941406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.d( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556349754s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439941406s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.f( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.595831871s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479415894s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.c( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556346893s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439971924s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.c( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556335449s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439971924s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.8( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.595726013s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479476929s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.a( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556248665s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.439987183s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.a( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556237221s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439987183s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.8( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.595714569s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479476929s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.b( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556143761s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439987183s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.b( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556133270s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439987183s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.a( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.556745529s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439926147s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.14( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555224419s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.439270020s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.3( v 61'51 (0'0,61'51] local-lis/les=58/59 n=1 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.595376015s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=59'49 lcod 61'50 mlcod 61'50 active pruub 156.479461670s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.14( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555200577s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.439270020s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.3( v 61'51 (0'0,61'51] local-lis/les=58/59 n=1 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.595344543s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=59'49 lcod 61'50 mlcod 0'0 unknown NOTIFY pruub 156.479461670s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.6( v 48'9 (0'0,48'9] local-lis/les=56/58 n=1 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555824280s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.440048218s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.6( v 48'9 (0'0,48'9] local-lis/les=56/58 n=1 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555810928s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440048218s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.5( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.595129967s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479476929s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.5( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.595119476s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479476929s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.7( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555717468s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.440093994s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.7( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555699348s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440093994s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.5( v 61'2 (0'0,61'2] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555648804s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 42'1 mlcod 42'1 active pruub 155.440124512s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.5( v 61'2 (0'0,61'2] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555634499s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 42'1 mlcod 0'0 unknown NOTIFY pruub 155.440124512s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.6( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555521965s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.440093994s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.7( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594787598s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479492188s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.6( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555420876s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440093994s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.7( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594770432s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479492188s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.5( v 48'9 (0'0,48'9] local-lis/les=56/58 n=1 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555303574s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.440109253s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.4( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594963074s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479461670s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.5( v 48'9 (0'0,48'9] local-lis/les=56/58 n=1 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555289268s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440109253s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.4( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555464745s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.440277100s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.4( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555452347s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440277100s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.4( v 52'48 (0'0,52'48] local-lis/les=58/59 n=1 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594621658s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479461670s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.1b( v 60'2 (0'0,60'2] local-lis/les=56/58 n=1 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555233002s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=60'2 lcod 42'1 mlcod 42'1 active pruub 155.440170288s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.1b( v 60'2 (0'0,60'2] local-lis/les=56/58 n=1 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555216789s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=60'2 lcod 42'1 mlcod 0'0 unknown NOTIFY pruub 155.440170288s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.19( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594547272s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479522705s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.19( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594532013s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479522705s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.1a( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594438553s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479522705s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.18( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555120468s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.440216064s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.1a( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594394684s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479522705s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.18( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555085182s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440216064s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.18( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.555011749s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.440246582s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.18( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554994583s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440246582s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.19( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554954529s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.440246582s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.1c( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594501495s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479797363s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.1f( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554958344s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.440277100s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.1c( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594488144s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479797363s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.1f( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554944992s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440277100s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.1b( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594101906s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479537964s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.1d( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594113350s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479553223s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.19( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554936409s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440246582s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.1d( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594035149s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479553223s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.1e( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.593986511s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 active pruub 156.479568481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.1e( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.593973160s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479568481s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[11.1b( v 52'48 (0'0,52'48] local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=9.594033241s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=52'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.479537964s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.1d( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554683685s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.440383911s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.1d( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554673195s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440383911s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.1c( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554598808s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.440338135s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.1c( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554582596s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440338135s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.12( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554494858s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.440368652s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.12( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554483414s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440368652s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.13( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554370880s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 active pruub 155.440399170s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.12( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554345131s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 active pruub 155.440383911s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[9.13( v 48'9 (0'0,48'9] local-lis/les=56/58 n=0 ec=56/47 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554359436s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=48'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440399170s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[8.12( v 42'1 (0'0,42'1] local-lis/les=56/58 n=0 ec=56/41 lis/c=56/56 les/c/f=58/58/0 sis=62 pruub=8.554332733s) [1] r=-1 lpr=62 pi=[56,62)/1 crt=42'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.440383911s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[12.10( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[12.12( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[12.a( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[12.8( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[12.b( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[12.6( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[12.e( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[12.1c( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: Rados config object exists: conf-nfs.cephfs
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: Creating key for client.nfs.cephfs.1.0.compute-2.cbyxlf-rgw
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: Bind address in nfs.cephfs.1.0.compute-2.cbyxlf's ganesha conf is defaulting to empty
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: Deploying daemon nfs.cephfs.1.0.compute-2.cbyxlf on compute-2
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 21 11:07:08 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[12.c( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:08 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 62 pg[12.19( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:09 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Jan 21 11:07:09 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Jan 21 11:07:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 21 11:07:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 21 11:07:09 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 21 11:07:09 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 63 pg[12.12( empty local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:09 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 63 pg[12.10( empty local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:09 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 63 pg[12.6( empty local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:09 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 63 pg[12.c( empty local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:09 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 63 pg[12.a( empty local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:09 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 63 pg[12.e( empty local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:09 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 63 pg[12.8( empty local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:09 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 63 pg[12.1c( empty local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:09 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 63 pg[12.b( empty local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:09 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 63 pg[12.19( empty local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:09 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:07:09 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:07:09 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:07:09 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 11:07:09 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:07:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:07:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:10 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.mbatwb
Jan 21 11:07:10 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.mbatwb
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbatwb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbatwb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbatwb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 21 11:07:10 np0005590810 ceph-mgr[74671]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 21 11:07:10 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:07:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 21 11:07:10 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 21 11:07:10 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 21 11:07:10 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 64 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=64) [0] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:10 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 64 pg[10.2( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=64) [0] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:10 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 64 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=64) [0] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:10 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 64 pg[10.a( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=64) [0] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:10 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 64 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=64) [0] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:10 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 64 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=64) [0] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:10 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 64 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=64) [0] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:10 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 64 pg[10.12( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=64) [0] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: Creating key for client.nfs.cephfs.2.0.compute-0.mbatwb
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbatwb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbatwb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 21 11:07:10 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 21 11:07:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 21 11:07:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.12( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.12( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.a( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:11 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.a( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.2( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.2( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:11 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 65 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=-1 lpr=65 pi=[58,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:12 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 20 completed events
Jan 21 11:07:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:07:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 11:07:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 21 11:07:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 21 11:07:12 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 21 11:07:12 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 21 11:07:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:07:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 21 11:07:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 11:07:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 21 11:07:13 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 21 11:07:13 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 21 11:07:13 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.mbatwb-rgw
Jan 21 11:07:13 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.mbatwb-rgw
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbatwb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbatwb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbatwb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 11:07:13 np0005590810 ceph-mgr[74671]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.mbatwb's ganesha conf is defaulting to empty
Jan 21 11:07:13 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.mbatwb's ganesha conf is defaulting to empty
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:07:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:07:13 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.mbatwb on compute-0
Jan 21 11:07:13 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.mbatwb on compute-0
Jan 21 11:07:13 np0005590810 podman[95250]: 2026-01-21 16:07:13.931845418 +0000 UTC m=+0.051581994 container create fbe8506ad41c1b95761b01e47b63bb8619e6f021a79b7720d4469ff6da395a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_williamson, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:07:13 np0005590810 systemd[1]: Started libpod-conmon-fbe8506ad41c1b95761b01e47b63bb8619e6f021a79b7720d4469ff6da395a55.scope.
Jan 21 11:07:14 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:07:14 np0005590810 podman[95250]: 2026-01-21 16:07:13.912004831 +0000 UTC m=+0.031741427 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:07:14 np0005590810 podman[95250]: 2026-01-21 16:07:14.021588784 +0000 UTC m=+0.141325370 container init fbe8506ad41c1b95761b01e47b63bb8619e6f021a79b7720d4469ff6da395a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:07:14 np0005590810 podman[95250]: 2026-01-21 16:07:14.030559755 +0000 UTC m=+0.150296331 container start fbe8506ad41c1b95761b01e47b63bb8619e6f021a79b7720d4469ff6da395a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:07:14 np0005590810 podman[95250]: 2026-01-21 16:07:14.034429795 +0000 UTC m=+0.154166391 container attach fbe8506ad41c1b95761b01e47b63bb8619e6f021a79b7720d4469ff6da395a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_williamson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Jan 21 11:07:14 np0005590810 competent_williamson[95267]: 167 167
Jan 21 11:07:14 np0005590810 systemd[1]: libpod-fbe8506ad41c1b95761b01e47b63bb8619e6f021a79b7720d4469ff6da395a55.scope: Deactivated successfully.
Jan 21 11:07:14 np0005590810 conmon[95267]: conmon fbe8506ad41c1b95761b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbe8506ad41c1b95761b01e47b63bb8619e6f021a79b7720d4469ff6da395a55.scope/container/memory.events
Jan 21 11:07:14 np0005590810 podman[95250]: 2026-01-21 16:07:14.039988101 +0000 UTC m=+0.159724697 container died fbe8506ad41c1b95761b01e47b63bb8619e6f021a79b7720d4469ff6da395a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_williamson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:07:14 np0005590810 systemd[1]: var-lib-containers-storage-overlay-14b6eddb2de336384f608be52456271cddb7c17b9bdd6ee8bef209806135835c-merged.mount: Deactivated successfully.
Jan 21 11:07:14 np0005590810 podman[95250]: 2026-01-21 16:07:14.077013916 +0000 UTC m=+0.196750492 container remove fbe8506ad41c1b95761b01e47b63bb8619e6f021a79b7720d4469ff6da395a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_williamson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:07:14 np0005590810 systemd[1]: libpod-conmon-fbe8506ad41c1b95761b01e47b63bb8619e6f021a79b7720d4469ff6da395a55.scope: Deactivated successfully.
Jan 21 11:07:14 np0005590810 systemd[1]: Reloading.
Jan 21 11:07:14 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:07:14 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:07:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 8 remapped+peering, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 131 B/s, 2 objects/s recovering
Jan 21 11:07:14 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 21 11:07:14 np0005590810 ceph-mon[74380]: Rados config object exists: conf-nfs.cephfs
Jan 21 11:07:14 np0005590810 ceph-mon[74380]: Creating key for client.nfs.cephfs.2.0.compute-0.mbatwb-rgw
Jan 21 11:07:14 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbatwb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 11:07:14 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbatwb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 11:07:14 np0005590810 ceph-mon[74380]: Bind address in nfs.cephfs.2.0.compute-0.mbatwb's ganesha conf is defaulting to empty
Jan 21 11:07:14 np0005590810 ceph-mon[74380]: Deploying daemon nfs.cephfs.2.0.compute-0.mbatwb on compute-0
Jan 21 11:07:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 21 11:07:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 21 11:07:14 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.6( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=6 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.2( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=6 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.6( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=6 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.2( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=6 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.1e( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=5 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.1a( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=5 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.1e( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=5 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.e( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=6 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.e( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=6 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.a( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=6 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.a( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=6 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.1a( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=5 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.12( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=4 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.12( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=4 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.16( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=4 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:14 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 67 pg[10.16( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=4 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:14 np0005590810 systemd[1]: Reloading.
Jan 21 11:07:14 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:07:14 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:07:14 np0005590810 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:07:14 np0005590810 podman[95409]: 2026-01-21 16:07:14.948786851 +0000 UTC m=+0.045674617 container create e4e2912e9b1a546ad3b2b90fc81c91da00233ed1c795ad1f86c6f350e084229f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 11:07:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71a136174947388d87eb21435681362302d29333993f34c0f024850c587cdf6d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71a136174947388d87eb21435681362302d29333993f34c0f024850c587cdf6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71a136174947388d87eb21435681362302d29333993f34c0f024850c587cdf6d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71a136174947388d87eb21435681362302d29333993f34c0f024850c587cdf6d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbatwb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:15 np0005590810 podman[95409]: 2026-01-21 16:07:14.930427493 +0000 UTC m=+0.027315279 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:07:15 np0005590810 podman[95409]: 2026-01-21 16:07:15.030219517 +0000 UTC m=+0.127107313 container init e4e2912e9b1a546ad3b2b90fc81c91da00233ed1c795ad1f86c6f350e084229f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 21 11:07:15 np0005590810 podman[95409]: 2026-01-21 16:07:15.037266083 +0000 UTC m=+0.134153849 container start e4e2912e9b1a546ad3b2b90fc81c91da00233ed1c795ad1f86c6f350e084229f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:07:15 np0005590810 bash[95409]: e4e2912e9b1a546ad3b2b90fc81c91da00233ed1c795ad1f86c6f350e084229f
Jan 21 11:07:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:15 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 21 11:07:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:15 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 21 11:07:15 np0005590810 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:07:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:15 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 21 11:07:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:15 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 21 11:07:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:15 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 21 11:07:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:15 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:07:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:15 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:15 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev a021559a-80da-4cbd-a84a-9618c61157c1 (Updating nfs.cephfs deployment (+3 -> 3))
Jan 21 11:07:15 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event a021559a-80da-4cbd-a84a-9618c61157c1 (Updating nfs.cephfs deployment (+3 -> 3)) in 16 seconds
Jan 21 11:07:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:15 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:15 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 1abb0ece-6f4c-4de5-b24c-fd68a7015952 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 21 11:07:15 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.jkqupt on compute-1
Jan 21 11:07:15 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.jkqupt on compute-1
Jan 21 11:07:15 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 21 11:07:15 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 68 pg[10.16( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:15 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 68 pg[10.a( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=6 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:15 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 68 pg[10.12( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:15 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 68 pg[10.2( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=6 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:15 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 68 pg[10.6( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=6 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:15 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 68 pg[10.e( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=6 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:15 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 68 pg[10.1e( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=5 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:15 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 68 pg[10.1a( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=5 ec=58/49 lis/c=65/58 les/c/f=66/59/0 sis=67) [0] r=0 lpr=67 pi=[58,67)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 21 11:07:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 8 remapped+peering, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 131 B/s, 1 objects/s recovering
Jan 21 11:07:16 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 21 11:07:16 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 21 11:07:16 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:16 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:16 np0005590810 ceph-mon[74380]: Deploying daemon haproxy.nfs.cephfs.compute-1.jkqupt on compute-1
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 21 11:07:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:16 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:07:17 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 21 completed events
Jan 21 11:07:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:07:17 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.2 deep-scrub starts
Jan 21 11:07:17 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.2 deep-scrub ok
Jan 21 11:07:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:07:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:17 np0005590810 ceph-mgr[74671]: [progress WARNING root] Starting Global Recovery Event,8 pgs not in active + clean state
Jan 21 11:07:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v73: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 487 B/s, 17 objects/s recovering
Jan 21 11:07:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 21 11:07:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 21 11:07:18 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.b deep-scrub starts
Jan 21 11:07:18 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.b deep-scrub ok
Jan 21 11:07:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 21 11:07:18 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:18 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 21 11:07:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 11:07:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 21 11:07:18 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 21 11:07:19 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 21 11:07:19 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 21 11:07:19 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 11:07:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 21 11:07:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 21 11:07:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 21 11:07:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v76: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 403 B/s, 16 objects/s recovering
Jan 21 11:07:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 21 11:07:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 21 11:07:20 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 21 11:07:20 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 21 11:07:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 21 11:07:20 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 21 11:07:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 11:07:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 21 11:07:20 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 21 11:07:20 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 71 pg[10.1d( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=71) [0] r=0 lpr=71 pi=[65,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:20 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 71 pg[10.5( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=71) [0] r=0 lpr=71 pi=[65,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:20 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 71 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=71) [0] r=0 lpr=71 pi=[65,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:20 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 71 pg[10.15( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=71) [0] r=0 lpr=71 pi=[65,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:21 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 21 11:07:21 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 21 11:07:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 21 11:07:21 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 11:07:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 21 11:07:22 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 21 11:07:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 72 pg[10.15( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 72 pg[10.15( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 72 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 72 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 72 pg[10.5( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 72 pg[10.5( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 72 pg[10.1d( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 72 pg[10.1d( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=72) [0]/[2] r=-1 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 21 11:07:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 21 11:07:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 21 11:07:22 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 21 11:07:22 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 21 11:07:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:07:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:07:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 21 11:07:22 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event b02d6365-b8a5-489e-a469-4af77bf10345 (Global Recovery Event) in 5 seconds
Jan 21 11:07:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:07:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:22 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.fgcddz on compute-0
Jan 21 11:07:22 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.fgcddz on compute-0
Jan 21 11:07:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 21 11:07:23 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 21 11:07:23 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 21 11:07:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 11:07:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 21 11:07:23 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 21 11:07:23 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:23 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:23 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:23 np0005590810 ceph-mon[74380]: Deploying daemon haproxy.nfs.cephfs.compute-0.fgcddz on compute-0
Jan 21 11:07:23 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 21 11:07:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:23 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca94000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 73 pg[10.16( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=73 pruub=15.232089996s) [1] r=-1 lpr=73 pi=[67,73)/1 crt=56'1081 mlcod 0'0 active pruub 177.757659912s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 73 pg[10.16( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=73 pruub=15.232025146s) [1] r=-1 lpr=73 pi=[67,73)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 177.757659912s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 73 pg[10.e( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=6 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=73 pruub=15.232366562s) [1] r=-1 lpr=73 pi=[67,73)/1 crt=56'1081 mlcod 0'0 active pruub 177.759048462s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 73 pg[10.e( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=6 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=73 pruub=15.232323647s) [1] r=-1 lpr=73 pi=[67,73)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 177.759048462s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 73 pg[10.6( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=6 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=73 pruub=15.231642723s) [1] r=-1 lpr=73 pi=[67,73)/1 crt=56'1081 mlcod 0'0 active pruub 177.759063721s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 73 pg[10.6( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=6 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=73 pruub=15.231550217s) [1] r=-1 lpr=73 pi=[67,73)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 177.759063721s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 73 pg[10.1e( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=5 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=73 pruub=15.230910301s) [1] r=-1 lpr=73 pi=[67,73)/1 crt=56'1081 mlcod 0'0 active pruub 177.759094238s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 73 pg[10.1e( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=5 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=73 pruub=15.230885506s) [1] r=-1 lpr=73 pi=[67,73)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 177.759094238s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v81: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:07:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 21 11:07:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 21 11:07:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 21 11:07:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 11:07:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 21 11:07:24 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 11:07:24 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 21 11:07:24 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 74 pg[10.16( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [1]/[0] r=0 lpr=74 pi=[67,74)/1 crt=56'1081 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 74 pg[10.16( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [1]/[0] r=0 lpr=74 pi=[67,74)/1 crt=56'1081 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 74 pg[10.e( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=6 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [1]/[0] r=0 lpr=74 pi=[67,74)/1 crt=56'1081 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 74 pg[10.e( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=6 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [1]/[0] r=0 lpr=74 pi=[67,74)/1 crt=56'1081 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 74 pg[10.5( v 73'1095 (0'0,73'1095] local-lis/les=0/0 n=6 ec=58/49 lis/c=72/65 les/c/f=73/66/0 sis=74) [0] r=0 lpr=74 pi=[65,74)/1 luod=0'0 crt=68'1092 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 74 pg[10.5( v 73'1095 (0'0,73'1095] local-lis/les=0/0 n=6 ec=58/49 lis/c=72/65 les/c/f=73/66/0 sis=74) [0] r=0 lpr=74 pi=[65,74)/1 crt=68'1092 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 74 pg[10.6( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=6 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [1]/[0] r=0 lpr=74 pi=[67,74)/1 crt=56'1081 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 74 pg[10.6( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=6 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [1]/[0] r=0 lpr=74 pi=[67,74)/1 crt=56'1081 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 74 pg[10.1d( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=5 ec=58/49 lis/c=72/65 les/c/f=73/66/0 sis=74) [0] r=0 lpr=74 pi=[65,74)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 74 pg[10.1d( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=5 ec=58/49 lis/c=72/65 les/c/f=73/66/0 sis=74) [0] r=0 lpr=74 pi=[65,74)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 74 pg[10.1e( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=5 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [1]/[0] r=0 lpr=74 pi=[67,74)/1 crt=56'1081 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 74 pg[10.1e( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=5 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [1]/[0] r=0 lpr=74 pi=[67,74)/1 crt=56'1081 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:25 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 21 11:07:25 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 21 11:07:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 21 11:07:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:25 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca8c0014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:07:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:07:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:07:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:07:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:07:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:07:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:07:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 21 11:07:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 21 11:07:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 21 11:07:26 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 21 11:07:26 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 75 pg[10.d( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=8 ec=58/49 lis/c=72/65 les/c/f=73/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:26 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 75 pg[10.d( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=8 ec=58/49 lis/c=72/65 les/c/f=73/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:26 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 75 pg[10.15( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=4 ec=58/49 lis/c=72/65 les/c/f=73/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:26 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 75 pg[10.15( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=4 ec=58/49 lis/c=72/65 les/c/f=73/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:26 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 11:07:26 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 75 pg[10.1d( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=5 ec=58/49 lis/c=72/65 les/c/f=73/66/0 sis=74) [0] r=0 lpr=74 pi=[65,74)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:26 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 75 pg[10.1e( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=5 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[67,74)/1 crt=56'1081 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:26 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 75 pg[10.e( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=6 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[67,74)/1 crt=56'1081 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:26 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 75 pg[10.5( v 73'1095 (0'0,73'1095] local-lis/les=74/75 n=6 ec=58/49 lis/c=72/65 les/c/f=73/66/0 sis=74) [0] r=0 lpr=74 pi=[65,74)/1 crt=73'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:26 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 75 pg[10.16( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[67,74)/1 crt=56'1081 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:26 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 75 pg[10.6( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=6 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[67,74)/1 crt=56'1081 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:27 np0005590810 podman[95568]: 2026-01-21 16:07:27.069735389 +0000 UTC m=+3.815894390 container create cc8654f433eb3b2b7c51045dfc8ec29ecec19267defa85798f369e2f6176a661 (image=quay.io/ceph/haproxy:2.3, name=sweet_jackson)
Jan 21 11:07:27 np0005590810 podman[95568]: 2026-01-21 16:07:27.050579852 +0000 UTC m=+3.796738882 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 21 11:07:27 np0005590810 systemd[1]: Started libpod-conmon-cc8654f433eb3b2b7c51045dfc8ec29ecec19267defa85798f369e2f6176a661.scope.
Jan 21 11:07:27 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:07:27 np0005590810 podman[95568]: 2026-01-21 16:07:27.156384977 +0000 UTC m=+3.902543977 container init cc8654f433eb3b2b7c51045dfc8ec29ecec19267defa85798f369e2f6176a661 (image=quay.io/ceph/haproxy:2.3, name=sweet_jackson)
Jan 21 11:07:27 np0005590810 podman[95568]: 2026-01-21 16:07:27.165350562 +0000 UTC m=+3.911509542 container start cc8654f433eb3b2b7c51045dfc8ec29ecec19267defa85798f369e2f6176a661 (image=quay.io/ceph/haproxy:2.3, name=sweet_jackson)
Jan 21 11:07:27 np0005590810 podman[95568]: 2026-01-21 16:07:27.168412806 +0000 UTC m=+3.914571836 container attach cc8654f433eb3b2b7c51045dfc8ec29ecec19267defa85798f369e2f6176a661 (image=quay.io/ceph/haproxy:2.3, name=sweet_jackson)
Jan 21 11:07:27 np0005590810 sweet_jackson[95687]: 0 0
Jan 21 11:07:27 np0005590810 systemd[1]: libpod-cc8654f433eb3b2b7c51045dfc8ec29ecec19267defa85798f369e2f6176a661.scope: Deactivated successfully.
Jan 21 11:07:27 np0005590810 conmon[95687]: conmon cc8654f433eb3b2b7c51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc8654f433eb3b2b7c51045dfc8ec29ecec19267defa85798f369e2f6176a661.scope/container/memory.events
Jan 21 11:07:27 np0005590810 podman[95568]: 2026-01-21 16:07:27.176424201 +0000 UTC m=+3.922583171 container died cc8654f433eb3b2b7c51045dfc8ec29ecec19267defa85798f369e2f6176a661 (image=quay.io/ceph/haproxy:2.3, name=sweet_jackson)
Jan 21 11:07:27 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8b05d1f11f75b9a6661af8163bac35fd64c3dda96b77d05348e90895a9c98eb1-merged.mount: Deactivated successfully.
Jan 21 11:07:27 np0005590810 podman[95568]: 2026-01-21 16:07:27.21974631 +0000 UTC m=+3.965905280 container remove cc8654f433eb3b2b7c51045dfc8ec29ecec19267defa85798f369e2f6176a661 (image=quay.io/ceph/haproxy:2.3, name=sweet_jackson)
Jan 21 11:07:27 np0005590810 systemd[1]: libpod-conmon-cc8654f433eb3b2b7c51045dfc8ec29ecec19267defa85798f369e2f6176a661.scope: Deactivated successfully.
Jan 21 11:07:27 np0005590810 systemd[1]: Reloading.
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.1d deep-scrub starts
Jan 21 11:07:27 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:07:27 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.1d deep-scrub ok
Jan 21 11:07:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 21 11:07:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 11:07:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 21 11:07:27 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 76 pg[10.16( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=4 ec=58/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=15.285565376s) [1] async=[1] r=-1 lpr=76 pi=[67,76)/1 crt=56'1081 mlcod 56'1081 active pruub 181.079162598s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 76 pg[10.16( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=4 ec=58/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=15.285209656s) [1] r=-1 lpr=76 pi=[67,76)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 181.079162598s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 76 pg[10.e( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=6 ec=58/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=15.242762566s) [1] async=[1] r=-1 lpr=76 pi=[67,76)/1 crt=56'1081 mlcod 56'1081 active pruub 181.037216187s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 76 pg[10.e( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=6 ec=58/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=15.242724419s) [1] r=-1 lpr=76 pi=[67,76)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 181.037216187s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 76 pg[10.6( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=6 ec=58/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=15.284423828s) [1] async=[1] r=-1 lpr=76 pi=[67,76)/1 crt=56'1081 mlcod 56'1081 active pruub 181.079177856s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 76 pg[10.6( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=6 ec=58/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=15.284387589s) [1] r=-1 lpr=76 pi=[67,76)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 181.079177856s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 76 pg[10.1e( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=5 ec=58/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=15.242026329s) [1] async=[1] r=-1 lpr=76 pi=[67,76)/1 crt=56'1081 mlcod 56'1081 active pruub 181.037200928s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 76 pg[10.1e( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=5 ec=58/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=15.241985321s) [1] r=-1 lpr=76 pi=[67,76)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 181.037200928s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 76 pg[10.15( v 56'1081 (0'0,56'1081] local-lis/les=75/76 n=4 ec=58/49 lis/c=72/65 les/c/f=73/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 76 pg[10.d( v 56'1081 (0'0,56'1081] local-lis/les=75/76 n=8 ec=58/49 lis/c=72/65 les/c/f=73/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 76 pg[10.18( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=76) [0] r=0 lpr=76 pi=[58,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:27 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 76 pg[10.8( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=76) [0] r=0 lpr=76 pi=[58,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:27 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 21 11:07:27 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 11:07:27 np0005590810 systemd[1]: Reloading.
Jan 21 11:07:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:27 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca74000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:27 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:07:27 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:07:27 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 22 completed events
Jan 21 11:07:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:07:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 21 11:07:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:07:27 np0005590810 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.fgcddz for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:07:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 21 11:07:28 np0005590810 podman[95832]: 2026-01-21 16:07:28.110050122 +0000 UTC m=+0.030768754 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 21 11:07:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 4 remapped+peering, 2 peering, 4 active+remapped, 343 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 59 B/s, 7 objects/s recovering
Jan 21 11:07:28 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 21 11:07:28 np0005590810 podman[95832]: 2026-01-21 16:07:28.30465122 +0000 UTC m=+0.225369832 container create 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:07:28 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 77 pg[10.8( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[58,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:28 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 77 pg[10.8( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[58,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:28 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 77 pg[10.18( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[58,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:28 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 77 pg[10.18( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[58,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:28 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 21 11:07:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabf14d686b7a8a36a6917c0e5c2c7b6160868d333ff9a580470ea2686377532/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:28 np0005590810 podman[95832]: 2026-01-21 16:07:28.392344099 +0000 UTC m=+0.313062721 container init 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:07:28 np0005590810 podman[95832]: 2026-01-21 16:07:28.398971172 +0000 UTC m=+0.319689784 container start 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:07:28 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 21 11:07:28 np0005590810 bash[95832]: 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0
Jan 21 11:07:28 np0005590810 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.fgcddz for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:07:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [NOTICE] 020/160728 (2) : New worker #1 (4) forked
Jan 21 11:07:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:07:29 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:29 np0005590810 ceph-mgr[74671]: [progress WARNING root] Starting Global Recovery Event,10 pgs not in active + clean state
Jan 21 11:07:29 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:07:29 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 21 11:07:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 21 11:07:29 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 21 11:07:29 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 21 11:07:29 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:29 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.jsxguj on compute-2
Jan 21 11:07:29 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.jsxguj on compute-2
Jan 21 11:07:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 21 11:07:29 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 21 11:07:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:29 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca94000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:29 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca80000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:30 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:30 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:30 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:30 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:30 np0005590810 ceph-mon[74380]: Deploying daemon haproxy.nfs.cephfs.compute-2.jsxguj on compute-2
Jan 21 11:07:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 4 remapped+peering, 2 peering, 4 active+remapped, 343 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 72 B/s, 9 objects/s recovering
Jan 21 11:07:30 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 21 11:07:30 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 21 11:07:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 21 11:07:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 21 11:07:30 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 21 11:07:30 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 79 pg[10.18( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=5 ec=58/49 lis/c=77/58 les/c/f=78/59/0 sis=79) [0] r=0 lpr=79 pi=[58,79)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:30 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 79 pg[10.18( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=5 ec=58/49 lis/c=77/58 les/c/f=78/59/0 sis=79) [0] r=0 lpr=79 pi=[58,79)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:30 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 79 pg[10.8( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=6 ec=58/49 lis/c=77/58 les/c/f=78/59/0 sis=79) [0] r=0 lpr=79 pi=[58,79)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:30 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 79 pg[10.8( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=6 ec=58/49 lis/c=77/58 les/c/f=78/59/0 sis=79) [0] r=0 lpr=79 pi=[58,79)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:31 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 21 11:07:31 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 21 11:07:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 21 11:07:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 21 11:07:31 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 21 11:07:31 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 80 pg[10.8( v 56'1081 (0'0,56'1081] local-lis/les=79/80 n=6 ec=58/49 lis/c=77/58 les/c/f=78/59/0 sis=79) [0] r=0 lpr=79 pi=[58,79)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:31 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 80 pg[10.18( v 56'1081 (0'0,56'1081] local-lis/les=79/80 n=5 ec=58/49 lis/c=77/58 les/c/f=78/59/0 sis=79) [0] r=0 lpr=79 pi=[58,79)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:31 np0005590810 python3[95887]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:07:31 np0005590810 podman[95888]: 2026-01-21 16:07:31.563944579 +0000 UTC m=+0.041568365 container create 0ae61d4783f94327690832da844802373ebd6b1bfacaa129730243ef5d78e1e7 (image=quay.io/ceph/ceph:v19, name=focused_cartwright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 11:07:31 np0005590810 systemd[1]: Started libpod-conmon-0ae61d4783f94327690832da844802373ebd6b1bfacaa129730243ef5d78e1e7.scope.
Jan 21 11:07:31 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:07:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36572b926375a025f9b6ca9999fa27daaf040c158c17219821af83f65a36f095/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36572b926375a025f9b6ca9999fa27daaf040c158c17219821af83f65a36f095/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:31 np0005590810 podman[95888]: 2026-01-21 16:07:31.63993233 +0000 UTC m=+0.117556146 container init 0ae61d4783f94327690832da844802373ebd6b1bfacaa129730243ef5d78e1e7 (image=quay.io/ceph/ceph:v19, name=focused_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 21 11:07:31 np0005590810 podman[95888]: 2026-01-21 16:07:31.546772902 +0000 UTC m=+0.024396708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:07:31 np0005590810 podman[95888]: 2026-01-21 16:07:31.648003877 +0000 UTC m=+0.125627663 container start 0ae61d4783f94327690832da844802373ebd6b1bfacaa129730243ef5d78e1e7 (image=quay.io/ceph/ceph:v19, name=focused_cartwright, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:07:31 np0005590810 podman[95888]: 2026-01-21 16:07:31.652259827 +0000 UTC m=+0.129883793 container attach 0ae61d4783f94327690832da844802373ebd6b1bfacaa129730243ef5d78e1e7 (image=quay.io/ceph/ceph:v19, name=focused_cartwright, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 21 11:07:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:31 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca8c0021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:31 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:31 np0005590810 focused_cartwright[95903]: could not fetch user info: no user info saved
Jan 21 11:07:32 np0005590810 systemd[1]: libpod-0ae61d4783f94327690832da844802373ebd6b1bfacaa129730243ef5d78e1e7.scope: Deactivated successfully.
Jan 21 11:07:32 np0005590810 conmon[95903]: conmon 0ae61d4783f943276908 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ae61d4783f94327690832da844802373ebd6b1bfacaa129730243ef5d78e1e7.scope/container/memory.events
Jan 21 11:07:32 np0005590810 podman[95888]: 2026-01-21 16:07:32.126522601 +0000 UTC m=+0.604146387 container died 0ae61d4783f94327690832da844802373ebd6b1bfacaa129730243ef5d78e1e7 (image=quay.io/ceph/ceph:v19, name=focused_cartwright, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:07:32 np0005590810 systemd[1]: var-lib-containers-storage-overlay-36572b926375a025f9b6ca9999fa27daaf040c158c17219821af83f65a36f095-merged.mount: Deactivated successfully.
Jan 21 11:07:32 np0005590810 podman[95888]: 2026-01-21 16:07:32.163794234 +0000 UTC m=+0.641418020 container remove 0ae61d4783f94327690832da844802373ebd6b1bfacaa129730243ef5d78e1e7 (image=quay.io/ceph/ceph:v19, name=focused_cartwright, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:07:32 np0005590810 systemd[1]: libpod-conmon-0ae61d4783f94327690832da844802373ebd6b1bfacaa129730243ef5d78e1e7.scope: Deactivated successfully.
Jan 21 11:07:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 4 remapped+peering, 2 peering, 4 active+remapped, 343 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 21 11:07:32 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 21 11:07:32 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 21 11:07:32 np0005590810 python3[96027]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid d9745984-fea8-5195-8ec5-61f685b5c785 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:07:32 np0005590810 podman[96028]: 2026-01-21 16:07:32.570475395 +0000 UTC m=+0.078198529 container create 7acf11d0cc42ef6c3a9da9e11e1485468b8464089ae3a9df7f0737c66a145d6a (image=quay.io/ceph/ceph:v19, name=agitated_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:07:32 np0005590810 systemd[1]: Started libpod-conmon-7acf11d0cc42ef6c3a9da9e11e1485468b8464089ae3a9df7f0737c66a145d6a.scope.
Jan 21 11:07:32 np0005590810 podman[96028]: 2026-01-21 16:07:32.514744266 +0000 UTC m=+0.022467420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:07:32 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:07:32 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde9f55651541f6deaef7332281df9d6f0040866e78927d36430dacfd49b4c00/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:32 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde9f55651541f6deaef7332281df9d6f0040866e78927d36430dacfd49b4c00/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:32 np0005590810 podman[96028]: 2026-01-21 16:07:32.644758163 +0000 UTC m=+0.152481347 container init 7acf11d0cc42ef6c3a9da9e11e1485468b8464089ae3a9df7f0737c66a145d6a (image=quay.io/ceph/ceph:v19, name=agitated_edison, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:07:32 np0005590810 podman[96028]: 2026-01-21 16:07:32.650751787 +0000 UTC m=+0.158474921 container start 7acf11d0cc42ef6c3a9da9e11e1485468b8464089ae3a9df7f0737c66a145d6a (image=quay.io/ceph/ceph:v19, name=agitated_edison, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 21 11:07:32 np0005590810 podman[96028]: 2026-01-21 16:07:32.654789131 +0000 UTC m=+0.162512275 container attach 7acf11d0cc42ef6c3a9da9e11e1485468b8464089ae3a9df7f0737c66a145d6a (image=quay.io/ceph/ceph:v19, name=agitated_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 21 11:07:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:07:32 np0005590810 agitated_edison[96044]: {
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "user_id": "openstack",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "display_name": "openstack",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "email": "",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "suspended": 0,
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "max_buckets": 1000,
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "subusers": [],
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "keys": [
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:        {
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:            "user": "openstack",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:            "access_key": "QI8H3FD6WOHI75OSF110",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:            "secret_key": "WcSyAUMbQKOaMwOZpj5D5gM1CRh8l8UBsU34QomI",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:            "active": true,
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:            "create_date": "2026-01-21T16:07:32.898014Z"
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:        }
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    ],
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "swift_keys": [],
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "caps": [],
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "op_mask": "read, write, delete",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "default_placement": "",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "default_storage_class": "",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "placement_tags": [],
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "bucket_quota": {
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:        "enabled": false,
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:        "check_on_raw": false,
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:        "max_size": -1,
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:        "max_size_kb": 0,
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:        "max_objects": -1
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    },
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "user_quota": {
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:        "enabled": false,
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:        "check_on_raw": false,
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:        "max_size": -1,
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:        "max_size_kb": 0,
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:        "max_objects": -1
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    },
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "temp_url_keys": [],
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "type": "rgw",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "mfa_ids": [],
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "account_id": "",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "path": "/",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "create_date": "2026-01-21T16:07:32.896627Z",
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "tags": [],
Jan 21 11:07:32 np0005590810 agitated_edison[96044]:    "group_ids": []
Jan 21 11:07:32 np0005590810 agitated_edison[96044]: }
Jan 21 11:07:32 np0005590810 agitated_edison[96044]: 
Jan 21 11:07:32 np0005590810 systemd[1]: libpod-7acf11d0cc42ef6c3a9da9e11e1485468b8464089ae3a9df7f0737c66a145d6a.scope: Deactivated successfully.
Jan 21 11:07:32 np0005590810 podman[96028]: 2026-01-21 16:07:32.978281631 +0000 UTC m=+0.486004775 container died 7acf11d0cc42ef6c3a9da9e11e1485468b8464089ae3a9df7f0737c66a145d6a (image=quay.io/ceph/ceph:v19, name=agitated_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:07:33 np0005590810 systemd[1]: var-lib-containers-storage-overlay-dde9f55651541f6deaef7332281df9d6f0040866e78927d36430dacfd49b4c00-merged.mount: Deactivated successfully.
Jan 21 11:07:33 np0005590810 podman[96028]: 2026-01-21 16:07:33.016137832 +0000 UTC m=+0.523860956 container remove 7acf11d0cc42ef6c3a9da9e11e1485468b8464089ae3a9df7f0737c66a145d6a (image=quay.io/ceph/ceph:v19, name=agitated_edison, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 21 11:07:33 np0005590810 systemd[1]: libpod-conmon-7acf11d0cc42ef6c3a9da9e11e1485468b8464089ae3a9df7f0737c66a145d6a.scope: Deactivated successfully.
Jan 21 11:07:33 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 21 11:07:33 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 21 11:07:33 np0005590810 python3[96166]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:07:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:33 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca940021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:33 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca80001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:33 np0005590810 ceph-mgr[74671]: [dashboard INFO request] [192.168.122.100:54014] [GET] [200] [0.118s] [6.3K] [ebf7d115-556f-468f-9ac3-b9d8d4565671] /
Jan 21 11:07:34 np0005590810 python3[96190]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:07:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v93: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 40 B/s, 0 objects/s recovering
Jan 21 11:07:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 21 11:07:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 21 11:07:34 np0005590810 ceph-mgr[74671]: [dashboard INFO request] [192.168.122.100:54018] [GET] [200] [0.002s] [6.3K] [7f174c2f-3580-4ee7-acac-805726dc08b9] /
Jan 21 11:07:34 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 21 11:07:34 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 21 11:07:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 21 11:07:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 11:07:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 21 11:07:34 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 21 11:07:34 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 21 11:07:35 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 21 11:07:35 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 21 11:07:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:35 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca8c0021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:35 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 21 11:07:35 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 11:07:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 21 11:07:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 21 11:07:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v96: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 41 B/s, 0 objects/s recovering
Jan 21 11:07:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 21 11:07:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 21 11:07:36 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 21 11:07:36 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 21 11:07:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 21 11:07:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:07:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:07:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:07:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:07:37 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.orwvcp on compute-1
Jan 21 11:07:37 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.orwvcp on compute-1
Jan 21 11:07:37 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 21 11:07:37 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 21 11:07:37 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 83 pg[10.a( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=9 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=83 pruub=9.921514511s) [1] r=-1 lpr=83 pi=[67,83)/1 crt=56'1081 mlcod 0'0 active pruub 185.758422852s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:37 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 83 pg[10.a( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=9 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=83 pruub=9.921472549s) [1] r=-1 lpr=83 pi=[67,83)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 185.758422852s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:37 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 83 pg[10.1a( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=83 pruub=9.921339989s) [1] r=-1 lpr=83 pi=[67,83)/1 crt=56'1081 mlcod 0'0 active pruub 185.759246826s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:37 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 83 pg[10.1a( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=83 pruub=9.921064377s) [1] r=-1 lpr=83 pi=[67,83)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 185.759246826s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:37 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca940021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 21 11:07:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:37 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca80001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 21 11:07:37 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 21 11:07:37 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 84 pg[10.a( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=9 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=84) [1]/[0] r=0 lpr=84 pi=[67,84)/1 crt=56'1081 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:37 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 84 pg[10.a( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=9 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=84) [1]/[0] r=0 lpr=84 pi=[67,84)/1 crt=56'1081 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:37 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 84 pg[10.1a( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=84) [1]/[0] r=0 lpr=84 pi=[67,84)/1 crt=56'1081 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:37 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 84 pg[10.1a( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=84) [1]/[0] r=0 lpr=84 pi=[67,84)/1 crt=56'1081 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 2 op/s
Jan 21 11:07:38 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 11:07:38 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:38 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:38 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:38 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:38 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 21 11:07:38 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:07:38 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:07:38 np0005590810 ceph-mon[74380]: Deploying daemon keepalived.nfs.cephfs.compute-1.orwvcp on compute-1
Jan 21 11:07:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:38 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca8c0021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:38 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 21 11:07:38 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 21 11:07:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 21 11:07:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 21 11:07:38 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 21 11:07:38 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 85 pg[10.1a( v 56'1081 (0'0,56'1081] local-lis/les=84/85 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[67,84)/1 crt=56'1081 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:38 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 85 pg[10.a( v 56'1081 (0'0,56'1081] local-lis/les=84/85 n=9 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[67,84)/1 crt=56'1081 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:39 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 21 11:07:39 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 21 11:07:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:39 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:39 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca8c0021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 21 11:07:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 21 11:07:40 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 21 11:07:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 2 op/s
Jan 21 11:07:40 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 86 pg[10.1a( v 56'1081 (0'0,56'1081] local-lis/les=84/85 n=4 ec=58/49 lis/c=84/67 les/c/f=85/68/0 sis=86 pruub=14.561050415s) [1] async=[1] r=-1 lpr=86 pi=[67,86)/1 crt=56'1081 mlcod 56'1081 active pruub 193.181488037s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:40 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 86 pg[10.1a( v 56'1081 (0'0,56'1081] local-lis/les=84/85 n=4 ec=58/49 lis/c=84/67 les/c/f=85/68/0 sis=86 pruub=14.560973167s) [1] r=-1 lpr=86 pi=[67,86)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 193.181488037s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:40 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 86 pg[10.a( v 56'1081 (0'0,56'1081] local-lis/les=84/85 n=9 ec=58/49 lis/c=84/67 les/c/f=85/68/0 sis=86 pruub=14.560837746s) [1] async=[1] r=-1 lpr=86 pi=[67,86)/1 crt=56'1081 mlcod 56'1081 active pruub 193.181564331s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:40 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 86 pg[10.a( v 56'1081 (0'0,56'1081] local-lis/les=84/85 n=9 ec=58/49 lis/c=84/67 les/c/f=85/68/0 sis=86 pruub=14.560744286s) [1] r=-1 lpr=86 pi=[67,86)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 193.181564331s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:40 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca940021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:40 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 21 11:07:40 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 21 11:07:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 21 11:07:41 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 21 11:07:41 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 21 11:07:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 21 11:07:41 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 21 11:07:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:41 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca80002b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:41 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca74002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:07:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:42 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca8c0021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:42 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 21 11:07:42 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 21 11:07:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:07:43 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 21 11:07:43 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 21 11:07:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:43 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca940095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:07:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[95425]: 21/01/2026 16:07:43 : epoch 6970f9b3 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fca80002b60 fd 38 proxy ignored for local
Jan 21 11:07:43 np0005590810 kernel: ganesha.nfsd[95472]: segfault at 50 ip 00007fcb1d72632e sp 00007fca92ffc210 error 4 in libntirpc.so.5.8[7fcb1d70b000+2c000] likely on CPU 1 (core 0, socket 1)
Jan 21 11:07:43 np0005590810 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 21 11:07:43 np0005590810 systemd[1]: Created slice Slice /system/systemd-coredump.
Jan 21 11:07:43 np0005590810 systemd[1]: Started Process Core Dump (PID 96198/UID 0).
Jan 21 11:07:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 91 B/s, 4 objects/s recovering
Jan 21 11:07:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 21 11:07:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 21 11:07:44 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 21 11:07:44 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 21 11:07:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 21 11:07:44 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 21 11:07:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 11:07:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 21 11:07:44 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 21 11:07:45 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 21 11:07:45 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 21 11:07:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 21 11:07:45 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 11:07:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:07:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 21 11:07:45 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 21 11:07:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:07:45 np0005590810 systemd-coredump[96199]: Process 95429 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 45:#012#0  0x00007fcb1d72632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012#1  0x0000000000000000 n/a (n/a + 0x0)#012#2  0x00007fcb1d730900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)#012ELF object binary architecture: AMD x86-64
Jan 21 11:07:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 21 11:07:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:45 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:07:45 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:07:45 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:07:45 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:07:45 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 21 11:07:45 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 21 11:07:45 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.cjwjsm on compute-2
Jan 21 11:07:45 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.cjwjsm on compute-2
Jan 21 11:07:45 np0005590810 systemd[1]: systemd-coredump@0-96198-0.service: Deactivated successfully.
Jan 21 11:07:45 np0005590810 systemd[1]: systemd-coredump@0-96198-0.service: Consumed 2.071s CPU time.
Jan 21 11:07:45 np0005590810 podman[96204]: 2026-01-21 16:07:45.996060382 +0000 UTC m=+0.033853079 container died e4e2912e9b1a546ad3b2b90fc81c91da00233ed1c795ad1f86c6f350e084229f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 21 11:07:46 np0005590810 systemd[1]: var-lib-containers-storage-overlay-71a136174947388d87eb21435681362302d29333993f34c0f024850c587cdf6d-merged.mount: Deactivated successfully.
Jan 21 11:07:46 np0005590810 podman[96204]: 2026-01-21 16:07:46.042144805 +0000 UTC m=+0.079937472 container remove e4e2912e9b1a546ad3b2b90fc81c91da00233ed1c795ad1f86c6f350e084229f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 21 11:07:46 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Main process exited, code=exited, status=139/n/a
Jan 21 11:07:46 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Failed with result 'exit-code'.
Jan 21 11:07:46 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 2.276s CPU time.
Jan 21 11:07:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 91 B/s, 4 objects/s recovering
Jan 21 11:07:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 21 11:07:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 21 11:07:46 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 21 11:07:46 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 21 11:07:46 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:46 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:46 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:46 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 21 11:07:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 21 11:07:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 11:07:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 21 11:07:46 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 21 11:07:47 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 21 11:07:47 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 21 11:07:47 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:07:47 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:07:47 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 21 11:07:47 np0005590810 ceph-mon[74380]: Deploying daemon keepalived.nfs.cephfs.compute-2.cjwjsm on compute-2
Jan 21 11:07:47 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 11:07:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 21 11:07:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 21 11:07:47 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 21 11:07:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:07:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 2 unknown, 2 remapped+peering, 349 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:07:48 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 21 11:07:48 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 21 11:07:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 21 11:07:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 21 11:07:48 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 21 11:07:49 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 21 11:07:49 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 21 11:07:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 21 11:07:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 2 unknown, 2 remapped+peering, 349 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:07:50 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Jan 21 11:07:50 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Jan 21 11:07:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 21 11:07:50 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 21 11:07:51 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Jan 21 11:07:51 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Jan 21 11:07:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 21 11:07:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 21 11:07:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 21 11:07:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/160751 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:07:52 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 2 unknown, 2 remapped+peering, 349 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:07:52 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Jan 21 11:07:52 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Jan 21 11:07:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:07:53 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.c scrub starts
Jan 21 11:07:53 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.c scrub ok
Jan 21 11:07:54 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 170 B/s wr, 13 op/s; 73 B/s, 4 objects/s recovering
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 21 11:07:54 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.a scrub starts
Jan 21 11:07:54 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.a scrub ok
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 21 11:07:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:54 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:07:54 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:07:54 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 21 11:07:54 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 21 11:07:54 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:07:54 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:07:54 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.mqubfc on compute-0
Jan 21 11:07:54 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.mqubfc on compute-0
Jan 21 11:07:55 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.e scrub starts
Jan 21 11:07:55 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.e scrub ok
Jan 21 11:07:55 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 95 pg[10.d( v 56'1081 (0'0,56'1081] local-lis/les=75/76 n=8 ec=58/49 lis/c=75/75 les/c/f=76/76/0 sis=95 pruub=11.854091644s) [1] r=-1 lpr=95 pi=[75,95)/1 crt=56'1081 mlcod 0'0 active pruub 205.798873901s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:55 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 95 pg[10.1d( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=5 ec=58/49 lis/c=74/74 les/c/f=75/75/0 sis=95 pruub=11.092589378s) [1] r=-1 lpr=95 pi=[74,95)/1 crt=56'1081 mlcod 0'0 active pruub 205.037841797s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:55 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 95 pg[10.1d( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=5 ec=58/49 lis/c=74/74 les/c/f=75/75/0 sis=95 pruub=11.092545509s) [1] r=-1 lpr=95 pi=[74,95)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 205.037841797s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:55 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 95 pg[10.d( v 56'1081 (0'0,56'1081] local-lis/les=75/76 n=8 ec=58/49 lis/c=75/75 les/c/f=76/76/0 sis=95 pruub=11.853294373s) [1] r=-1 lpr=95 pi=[75,95)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 205.798873901s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:55 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 11:07:55 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:55 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:55 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:07:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 21 11:07:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 21 11:07:55 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 21 11:07:55 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 96 pg[10.d( v 56'1081 (0'0,56'1081] local-lis/les=75/76 n=8 ec=58/49 lis/c=75/75 les/c/f=76/76/0 sis=96) [1]/[0] r=0 lpr=96 pi=[75,96)/1 crt=56'1081 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:55 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 96 pg[10.d( v 56'1081 (0'0,56'1081] local-lis/les=75/76 n=8 ec=58/49 lis/c=75/75 les/c/f=76/76/0 sis=96) [1]/[0] r=0 lpr=96 pi=[75,96)/1 crt=56'1081 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:55 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 96 pg[10.1d( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=5 ec=58/49 lis/c=74/74 les/c/f=75/75/0 sis=96) [1]/[0] r=0 lpr=96 pi=[74,96)/1 crt=56'1081 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:55 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 96 pg[10.1d( v 56'1081 (0'0,56'1081] local-lis/les=74/75 n=5 ec=58/49 lis/c=74/74 les/c/f=75/75/0 sis=96) [1]/[0] r=0 lpr=96 pi=[74,96)/1 crt=56'1081 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 11:07:56 np0005590810 systemd-logind[795]: New session 37 of user zuul.
Jan 21 11:07:56 np0005590810 systemd[1]: Started Session 37 of User zuul.
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:07:56
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['cephfs.cephfs.data', '.nfs', 'vms', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'images', '.mgr', '.rgw.root']
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 179 B/s wr, 14 op/s; 77 B/s, 5 objects/s recovering
Jan 21 11:07:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 21 11:07:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:07:56 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Scheduled restart job, restart counter is at 1.
Jan 21 11:07:56 np0005590810 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:07:56 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 2.276s CPU time.
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:07:56 np0005590810 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:07:56 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:07:56 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Jan 21 11:07:56 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Jan 21 11:07:56 np0005590810 podman[96464]: 2026-01-21 16:07:56.553972048 +0000 UTC m=+0.028222616 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:07:56 np0005590810 podman[96464]: 2026-01-21 16:07:56.68481002 +0000 UTC m=+0.159060568 container create 2c38abf31015215008bf4a63a17bab99d0f193ec9af435bb4ca0778f31a42759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:07:57 np0005590810 python3.9[96575]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:07:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 21 11:07:57 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a613102054f9f68f6c22edefe80b177a73e5176f25e41b8d8aa05d2b4e5b86e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:57 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a613102054f9f68f6c22edefe80b177a73e5176f25e41b8d8aa05d2b4e5b86e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:57 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a613102054f9f68f6c22edefe80b177a73e5176f25e41b8d8aa05d2b4e5b86e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:57 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a613102054f9f68f6c22edefe80b177a73e5176f25e41b8d8aa05d2b4e5b86e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbatwb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:07:57 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Jan 21 11:07:57 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Jan 21 11:07:57 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:07:57 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 21 11:07:57 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:07:57 np0005590810 ceph-mon[74380]: Deploying daemon keepalived.nfs.cephfs.compute-0.mqubfc on compute-0
Jan 21 11:07:57 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 21 11:07:57 np0005590810 podman[96464]: 2026-01-21 16:07:57.649961917 +0000 UTC m=+1.124212485 container init 2c38abf31015215008bf4a63a17bab99d0f193ec9af435bb4ca0778f31a42759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 21 11:07:57 np0005590810 podman[96464]: 2026-01-21 16:07:57.656382355 +0000 UTC m=+1.130632903 container start 2c38abf31015215008bf4a63a17bab99d0f193ec9af435bb4ca0778f31a42759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:07:57 np0005590810 bash[96464]: 2c38abf31015215008bf4a63a17bab99d0f193ec9af435bb4ca0778f31a42759
Jan 21 11:07:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 11:07:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 21 11:07:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:07:57 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 21 11:07:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:07:57 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 21 11:07:57 np0005590810 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:07:57 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 21 11:07:57 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 97 pg[10.d( v 56'1081 (0'0,56'1081] local-lis/les=96/97 n=8 ec=58/49 lis/c=75/75 les/c/f=76/76/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[75,96)/1 crt=56'1081 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:57 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 97 pg[10.1d( v 56'1081 (0'0,56'1081] local-lis/les=96/97 n=5 ec=58/49 lis/c=74/74 les/c/f=75/75/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[74,96)/1 crt=56'1081 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:07:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:07:57 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 21 11:07:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:07:57 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 21 11:07:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:07:57 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 21 11:07:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:07:57 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 21 11:07:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:07:57 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 21 11:07:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:07:57 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:07:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:07:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 21 11:07:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 21 11:07:58 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 21 11:07:58 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 98 pg[10.d( v 56'1081 (0'0,56'1081] local-lis/les=96/97 n=8 ec=58/49 lis/c=96/75 les/c/f=97/76/0 sis=98 pruub=15.550769806s) [1] async=[1] r=-1 lpr=98 pi=[75,98)/1 crt=56'1081 mlcod 56'1081 active pruub 212.024368286s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:58 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 98 pg[10.d( v 56'1081 (0'0,56'1081] local-lis/les=96/97 n=8 ec=58/49 lis/c=96/75 les/c/f=97/76/0 sis=98 pruub=15.550668716s) [1] r=-1 lpr=98 pi=[75,98)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 212.024368286s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:58 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 98 pg[10.1d( v 56'1081 (0'0,56'1081] local-lis/les=96/97 n=5 ec=58/49 lis/c=96/74 les/c/f=97/75/0 sis=98 pruub=15.549856186s) [1] async=[1] r=-1 lpr=98 pi=[74,98)/1 crt=56'1081 mlcod 56'1081 active pruub 212.024536133s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:07:58 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 98 pg[10.1d( v 56'1081 (0'0,56'1081] local-lis/les=96/97 n=5 ec=58/49 lis/c=96/74 les/c/f=97/75/0 sis=98 pruub=15.549716949s) [1] r=-1 lpr=98 pi=[74,98)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 212.024536133s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:07:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:07:58 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.b scrub starts
Jan 21 11:07:58 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.b scrub ok
Jan 21 11:07:58 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 11:07:58 np0005590810 python3.9[96872]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:07:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 21 11:07:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 21 11:07:59 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 21 11:07:59 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Jan 21 11:07:59 np0005590810 podman[96338]: 2026-01-21 16:07:59.501404014 +0000 UTC m=+4.024597009 container create d185eafbcdf50e8a96e66f93aacacee3ea39763692208e62992c6dcf99ee54b3 (image=quay.io/ceph/keepalived:2.2.4, name=flamboyant_murdock, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.component=keepalived-container, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 21 11:07:59 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Jan 21 11:07:59 np0005590810 podman[96338]: 2026-01-21 16:07:59.487408405 +0000 UTC m=+4.010601420 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 21 11:07:59 np0005590810 systemd[90084]: Starting Mark boot as successful...
Jan 21 11:07:59 np0005590810 systemd[90084]: Finished Mark boot as successful.
Jan 21 11:07:59 np0005590810 systemd[1]: Started libpod-conmon-d185eafbcdf50e8a96e66f93aacacee3ea39763692208e62992c6dcf99ee54b3.scope.
Jan 21 11:07:59 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:07:59 np0005590810 podman[96338]: 2026-01-21 16:07:59.579612362 +0000 UTC m=+4.102805347 container init d185eafbcdf50e8a96e66f93aacacee3ea39763692208e62992c6dcf99ee54b3 (image=quay.io/ceph/keepalived:2.2.4, name=flamboyant_murdock, io.openshift.expose-services=, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, architecture=x86_64, version=2.2.4, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vcs-type=git, release=1793, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 21 11:07:59 np0005590810 podman[96338]: 2026-01-21 16:07:59.590413143 +0000 UTC m=+4.113606118 container start d185eafbcdf50e8a96e66f93aacacee3ea39763692208e62992c6dcf99ee54b3 (image=quay.io/ceph/keepalived:2.2.4, name=flamboyant_murdock, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-type=git, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., version=2.2.4, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 21 11:07:59 np0005590810 podman[96338]: 2026-01-21 16:07:59.59388081 +0000 UTC m=+4.117073795 container attach d185eafbcdf50e8a96e66f93aacacee3ea39763692208e62992c6dcf99ee54b3 (image=quay.io/ceph/keepalived:2.2.4, name=flamboyant_murdock, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vcs-type=git, name=keepalived, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.buildah.version=1.28.2, vendor=Red Hat, Inc., version=2.2.4)
Jan 21 11:07:59 np0005590810 flamboyant_murdock[96913]: 0 0
Jan 21 11:07:59 np0005590810 systemd[1]: libpod-d185eafbcdf50e8a96e66f93aacacee3ea39763692208e62992c6dcf99ee54b3.scope: Deactivated successfully.
Jan 21 11:07:59 np0005590810 podman[96338]: 2026-01-21 16:07:59.597393717 +0000 UTC m=+4.120586702 container died d185eafbcdf50e8a96e66f93aacacee3ea39763692208e62992c6dcf99ee54b3 (image=quay.io/ceph/keepalived:2.2.4, name=flamboyant_murdock, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, version=2.2.4, release=1793, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2)
Jan 21 11:07:59 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6bf792073062efeb6b6687234ae57c31f6b409115f90f3d2a5470fe513f73ead-merged.mount: Deactivated successfully.
Jan 21 11:07:59 np0005590810 podman[96338]: 2026-01-21 16:07:59.636222518 +0000 UTC m=+4.159415503 container remove d185eafbcdf50e8a96e66f93aacacee3ea39763692208e62992c6dcf99ee54b3 (image=quay.io/ceph/keepalived:2.2.4, name=flamboyant_murdock, io.openshift.expose-services=, io.buildah.version=1.28.2, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, version=2.2.4, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 21 11:07:59 np0005590810 systemd[1]: libpod-conmon-d185eafbcdf50e8a96e66f93aacacee3ea39763692208e62992c6dcf99ee54b3.scope: Deactivated successfully.
Jan 21 11:07:59 np0005590810 systemd[1]: Reloading.
Jan 21 11:07:59 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:07:59 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:08:00 np0005590810 systemd[1]: Reloading.
Jan 21 11:08:00 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:08:00 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:08:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:00 np0005590810 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.mqubfc for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:08:00 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 21 11:08:00 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 21 11:08:00 np0005590810 podman[97062]: 2026-01-21 16:08:00.560661926 +0000 UTC m=+0.062471346 container create e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, architecture=x86_64, io.openshift.tags=Ceph keepalived)
Jan 21 11:08:00 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675c7112a779da25854d803e83cb7fabf8b1e8e9d8f7f3cb697acaf38055b981/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:00 np0005590810 podman[97062]: 2026-01-21 16:08:00.620227953 +0000 UTC m=+0.122037403 container init e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, architecture=x86_64, description=keepalived for Ceph, vcs-type=git, distribution-scope=public, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vendor=Red Hat, Inc., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2)
Jan 21 11:08:00 np0005590810 podman[97062]: 2026-01-21 16:08:00.627165396 +0000 UTC m=+0.128974806 container start e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64)
Jan 21 11:08:00 np0005590810 bash[97062]: e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a
Jan 21 11:08:00 np0005590810 podman[97062]: 2026-01-21 16:08:00.536916169 +0000 UTC m=+0.038725659 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 21 11:08:00 np0005590810 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.mqubfc for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:08:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:00 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 21 11:08:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:00 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 21 11:08:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:00 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 21 11:08:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:00 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 21 11:08:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:00 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 21 11:08:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:00 2026: Starting VRRP child process, pid=4
Jan 21 11:08:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:00 2026: Startup complete
Jan 21 11:08:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:00 2026: (VI_0) Entering BACKUP STATE (init)
Jan 21 11:08:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:00 2026: VRRP_Script(check_backend) succeeded
Jan 21 11:08:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 21 11:08:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:00 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 1abb0ece-6f4c-4de5-b24c-fd68a7015952 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Jan 21 11:08:00 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 1abb0ece-6f4c-4de5-b24c-fd68a7015952 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 46 seconds
Jan 21 11:08:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 21 11:08:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev c76068c3-362f-41f2-acb4-694e40b0b6a2 (Updating alertmanager deployment (+1 -> 1))
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:08:01 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:08:01 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 21 11:08:01 np0005590810 ceph-osd[82794]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 21 11:08:02 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:02 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:02 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:02 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v126: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:08:03 np0005590810 podman[97179]: 2026-01-21 16:08:03.214566361 +0000 UTC m=+1.569687427 volume create d752ee9193e9886357bcfa2b809f8afe608ce5fcb89522a1eba1705f0e426125
Jan 21 11:08:03 np0005590810 podman[97179]: 2026-01-21 16:08:03.225853697 +0000 UTC m=+1.580974763 container create 98c6190a4e8b355012c1f0c19d1b2ace280a1869b418ead610bde64a83fe57b8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=peaceful_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:03 np0005590810 ceph-mon[74380]: Deploying daemon alertmanager.compute-0 on compute-0
Jan 21 11:08:03 np0005590810 podman[97179]: 2026-01-21 16:08:03.197939811 +0000 UTC m=+1.553060897 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 21 11:08:03 np0005590810 systemd[1]: Started libpod-conmon-98c6190a4e8b355012c1f0c19d1b2ace280a1869b418ead610bde64a83fe57b8.scope.
Jan 21 11:08:03 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:03 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc727f107b5dec55ec5cc4fef4288379b7a4b29a13eccbd8f8060412cd357891/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:03 np0005590810 podman[97179]: 2026-01-21 16:08:03.334264772 +0000 UTC m=+1.689385858 container init 98c6190a4e8b355012c1f0c19d1b2ace280a1869b418ead610bde64a83fe57b8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=peaceful_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:03 np0005590810 podman[97179]: 2026-01-21 16:08:03.344439603 +0000 UTC m=+1.699560669 container start 98c6190a4e8b355012c1f0c19d1b2ace280a1869b418ead610bde64a83fe57b8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=peaceful_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:03 np0005590810 podman[97179]: 2026-01-21 16:08:03.349504138 +0000 UTC m=+1.704625244 container attach 98c6190a4e8b355012c1f0c19d1b2ace280a1869b418ead610bde64a83fe57b8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=peaceful_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:03 np0005590810 peaceful_borg[97314]: 65534 65534
Jan 21 11:08:03 np0005590810 systemd[1]: libpod-98c6190a4e8b355012c1f0c19d1b2ace280a1869b418ead610bde64a83fe57b8.scope: Deactivated successfully.
Jan 21 11:08:03 np0005590810 podman[97179]: 2026-01-21 16:08:03.351288214 +0000 UTC m=+1.706409290 container died 98c6190a4e8b355012c1f0c19d1b2ace280a1869b418ead610bde64a83fe57b8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=peaceful_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:03 np0005590810 systemd[1]: var-lib-containers-storage-overlay-dc727f107b5dec55ec5cc4fef4288379b7a4b29a13eccbd8f8060412cd357891-merged.mount: Deactivated successfully.
Jan 21 11:08:03 np0005590810 podman[97179]: 2026-01-21 16:08:03.402418151 +0000 UTC m=+1.757539237 container remove 98c6190a4e8b355012c1f0c19d1b2ace280a1869b418ead610bde64a83fe57b8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=peaceful_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:03 np0005590810 podman[97179]: 2026-01-21 16:08:03.406782295 +0000 UTC m=+1.761903371 volume remove d752ee9193e9886357bcfa2b809f8afe608ce5fcb89522a1eba1705f0e426125
Jan 21 11:08:03 np0005590810 systemd[1]: libpod-conmon-98c6190a4e8b355012c1f0c19d1b2ace280a1869b418ead610bde64a83fe57b8.scope: Deactivated successfully.
Jan 21 11:08:03 np0005590810 podman[97331]: 2026-01-21 16:08:03.474268615 +0000 UTC m=+0.044920799 volume create 6906cb8c7cc4abacaced6b5744e70bef6222a13a796ad12c2b72dbbbd698d56e
Jan 21 11:08:03 np0005590810 podman[97331]: 2026-01-21 16:08:03.485033245 +0000 UTC m=+0.055685419 container create c530290dc9aa544dd981db364bff0f839565a0a09255c861db4c36f0f809cff6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=exciting_wilson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:03 np0005590810 systemd[1]: Started libpod-conmon-c530290dc9aa544dd981db364bff0f839565a0a09255c861db4c36f0f809cff6.scope.
Jan 21 11:08:03 np0005590810 podman[97331]: 2026-01-21 16:08:03.459058188 +0000 UTC m=+0.029710392 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 21 11:08:03 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:03 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579de3e6d8d67be6df698934a0058e3108002d8516facd605ee46311eb75126e/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:03 np0005590810 podman[97331]: 2026-01-21 16:08:03.573089655 +0000 UTC m=+0.143741869 container init c530290dc9aa544dd981db364bff0f839565a0a09255c861db4c36f0f809cff6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=exciting_wilson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:03 np0005590810 podman[97331]: 2026-01-21 16:08:03.579998257 +0000 UTC m=+0.150650441 container start c530290dc9aa544dd981db364bff0f839565a0a09255c861db4c36f0f809cff6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=exciting_wilson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:03 np0005590810 exciting_wilson[97348]: 65534 65534
Jan 21 11:08:03 np0005590810 systemd[1]: libpod-c530290dc9aa544dd981db364bff0f839565a0a09255c861db4c36f0f809cff6.scope: Deactivated successfully.
Jan 21 11:08:03 np0005590810 podman[97331]: 2026-01-21 16:08:03.58434055 +0000 UTC m=+0.154992734 container attach c530290dc9aa544dd981db364bff0f839565a0a09255c861db4c36f0f809cff6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=exciting_wilson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:03 np0005590810 podman[97331]: 2026-01-21 16:08:03.584926918 +0000 UTC m=+0.155579092 container died c530290dc9aa544dd981db364bff0f839565a0a09255c861db4c36f0f809cff6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=exciting_wilson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:03 np0005590810 systemd[1]: var-lib-containers-storage-overlay-579de3e6d8d67be6df698934a0058e3108002d8516facd605ee46311eb75126e-merged.mount: Deactivated successfully.
Jan 21 11:08:03 np0005590810 podman[97331]: 2026-01-21 16:08:03.623586244 +0000 UTC m=+0.194238428 container remove c530290dc9aa544dd981db364bff0f839565a0a09255c861db4c36f0f809cff6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=exciting_wilson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:03 np0005590810 podman[97331]: 2026-01-21 16:08:03.627443782 +0000 UTC m=+0.198095976 volume remove 6906cb8c7cc4abacaced6b5744e70bef6222a13a796ad12c2b72dbbbd698d56e
Jan 21 11:08:03 np0005590810 systemd[1]: libpod-conmon-c530290dc9aa544dd981db364bff0f839565a0a09255c861db4c36f0f809cff6.scope: Deactivated successfully.
Jan 21 11:08:03 np0005590810 systemd[1]: Reloading.
Jan 21 11:08:03 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:08:03 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:08:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:03 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:08:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:03 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:08:04 np0005590810 systemd[1]: Reloading.
Jan 21 11:08:04 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:08:04 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:08:04 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 23 completed events
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:08:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:04 2026: (VI_0) Entering MASTER STATE
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 21 11:08:04 np0005590810 systemd[1]: Starting Ceph alertmanager.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 21 11:08:04 np0005590810 podman[97493]: 2026-01-21 16:08:04.516848846 +0000 UTC m=+0.029569528 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:04 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 957ebf1e-5680-4476-bbec-353675feef3b (Global Recovery Event) in 36 seconds
Jan 21 11:08:04 np0005590810 podman[97493]: 2026-01-21 16:08:04.663867395 +0000 UTC m=+0.176588017 volume create fc0bbe8d4d755110c76dfe8e47f4663ca949c121ab4ebe1a937ea76269d98e42
Jan 21 11:08:04 np0005590810 podman[97493]: 2026-01-21 16:08:04.676820192 +0000 UTC m=+0.189540824 container create 8b88c706f1c281ed839a461eb527042d837bac9b6eb951b300d6634e57c39e36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 21 11:08:04 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf90896e5c47b76a17693b905bfa012c197a8202311d88c1fc9b37583433f8b/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:04 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf90896e5c47b76a17693b905bfa012c197a8202311d88c1fc9b37583433f8b/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:04 np0005590810 podman[97493]: 2026-01-21 16:08:04.747978795 +0000 UTC m=+0.260699447 container init 8b88c706f1c281ed839a461eb527042d837bac9b6eb951b300d6634e57c39e36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:04 np0005590810 podman[97493]: 2026-01-21 16:08:04.754151844 +0000 UTC m=+0.266872476 container start 8b88c706f1c281ed839a461eb527042d837bac9b6eb951b300d6634e57c39e36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:04 np0005590810 bash[97493]: 8b88c706f1c281ed839a461eb527042d837bac9b6eb951b300d6634e57c39e36
Jan 21 11:08:04 np0005590810 systemd[1]: Started Ceph alertmanager.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:08:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[97509]: ts=2026-01-21T16:08:04.781Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Jan 21 11:08:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[97509]: ts=2026-01-21T16:08:04.781Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Jan 21 11:08:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[97509]: ts=2026-01-21T16:08:04.790Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Jan 21 11:08:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[97509]: ts=2026-01-21T16:08:04.792Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Jan 21 11:08:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[97509]: ts=2026-01-21T16:08:04.828Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 21 11:08:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[97509]: ts=2026-01-21T16:08:04.828Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[97509]: ts=2026-01-21T16:08:04.833Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Jan 21 11:08:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[97509]: ts=2026-01-21T16:08:04.833Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:04 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev c76068c3-362f-41f2-acb4-694e40b0b6a2 (Updating alertmanager deployment (+1 -> 1))
Jan 21 11:08:04 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event c76068c3-362f-41f2-acb4-694e40b0b6a2 (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 21 11:08:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:04 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev a731cdda-06df-4cfd-83c9-1fe2b40340e3 (Updating grafana deployment (+1 -> 1))
Jan 21 11:08:04 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Jan 21 11:08:04 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Jan 21 11:08:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Jan 21 11:08:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Jan 21 11:08:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Jan 21 11:08:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 21 11:08:05 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 21 11:08:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Jan 21 11:08:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:05 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Jan 21 11:08:05 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Jan 21 11:08:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 21 11:08:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 21 11:08:06 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[97509]: ts=2026-01-21T16:08:06.793Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000588869s
Jan 21 11:08:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 21 11:08:07 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 21 11:08:07 np0005590810 ceph-mon[74380]: Regenerating cephadm self-signed grafana TLS certificates
Jan 21 11:08:07 np0005590810 ceph-mon[74380]: Deploying daemon grafana.compute-0 on compute-0
Jan 21 11:08:07 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 21 11:08:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 21 11:08:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 21 11:08:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 21 11:08:08 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 21 11:08:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:08:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:08 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:09.181061) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011689181256, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7889, "num_deletes": 251, "total_data_size": 10797540, "memory_usage": 11102928, "flush_reason": "Manual Compaction"}
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011689249861, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 8799578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 147, "largest_seqno": 8027, "table_properties": {"data_size": 8770453, "index_size": 18578, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 91977, "raw_average_key_size": 24, "raw_value_size": 8698319, "raw_average_value_size": 2301, "num_data_blocks": 819, "num_entries": 3780, "num_filter_entries": 3780, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011371, "oldest_key_time": 1769011371, "file_creation_time": 1769011689, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 68852 microseconds, and 19740 cpu microseconds.
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:09.249940) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 8799578 bytes OK
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:09.249972) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:09.251558) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:09.251576) EVENT_LOG_v1 {"time_micros": 1769011689251571, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:09.251604) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10761638, prev total WAL file size 10761638, number of live WAL files 2.
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:09.253926) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(8593KB) 13(58KB) 8(1944B)]
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011689254027, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8861149, "oldest_snapshot_seqno": -1}
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3598 keys, 8814415 bytes, temperature: kUnknown
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011689324811, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 8814415, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8785651, "index_size": 18670, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9029, "raw_key_size": 90149, "raw_average_key_size": 25, "raw_value_size": 8715020, "raw_average_value_size": 2422, "num_data_blocks": 825, "num_entries": 3598, "num_filter_entries": 3598, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769011689, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:09.325427) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 8814415 bytes
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:09.408931) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.6 rd, 124.0 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(8.5, 0.0 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3890, records dropped: 292 output_compression: NoCompression
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:09.408985) EVENT_LOG_v1 {"time_micros": 1769011689408963, "job": 4, "event": "compaction_finished", "compaction_time_micros": 71104, "compaction_time_cpu_micros": 20004, "output_level": 6, "num_output_files": 1, "total_output_size": 8814415, "num_input_records": 3890, "num_output_records": 3598, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011689410906, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011689410996, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011689411037, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:09.253789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:08:09 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 25 completed events
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 21 11:08:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 21 11:08:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:09 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:08:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 21 11:08:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:10 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:10 np0005590810 systemd[1]: session-37.scope: Deactivated successfully.
Jan 21 11:08:10 np0005590810 systemd[1]: session-37.scope: Consumed 8.801s CPU time.
Jan 21 11:08:10 np0005590810 systemd-logind[795]: Session 37 logged out. Waiting for processes to exit.
Jan 21 11:08:10 np0005590810 systemd-logind[795]: Removed session 37.
Jan 21 11:08:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 21 11:08:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:10 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 21 11:08:10 np0005590810 ceph-mgr[74671]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 21 11:08:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:11 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9198001cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:11 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 21 11:08:12 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 21 11:08:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 21 11:08:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 21 11:08:12 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 21 11:08:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:12 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:08:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:13 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/160813 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:08:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:13 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91980027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 31 B/s, 2 objects/s recovering
Jan 21 11:08:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 21 11:08:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 21 11:08:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 21 11:08:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:14 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:14 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 21 11:08:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 21 11:08:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 21 11:08:14 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 21 11:08:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[97509]: ts=2026-01-21T16:08:14.796Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003804215s
Jan 21 11:08:14 np0005590810 podman[97619]: 2026-01-21 16:08:14.933546934 +0000 UTC m=+8.830489206 container create 1e7714fd64a96ef81fdf0ebd9071c08250169b77006f09fb76e0da752658c4d5 (image=quay.io/ceph/grafana:10.4.0, name=pedantic_bhaskara, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:14 np0005590810 podman[97619]: 2026-01-21 16:08:14.914756888 +0000 UTC m=+8.811699180 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 21 11:08:14 np0005590810 systemd[1]: Started libpod-conmon-1e7714fd64a96ef81fdf0ebd9071c08250169b77006f09fb76e0da752658c4d5.scope.
Jan 21 11:08:15 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:15 np0005590810 podman[97619]: 2026-01-21 16:08:15.033067676 +0000 UTC m=+8.930009968 container init 1e7714fd64a96ef81fdf0ebd9071c08250169b77006f09fb76e0da752658c4d5 (image=quay.io/ceph/grafana:10.4.0, name=pedantic_bhaskara, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:15 np0005590810 podman[97619]: 2026-01-21 16:08:15.041715161 +0000 UTC m=+8.938657433 container start 1e7714fd64a96ef81fdf0ebd9071c08250169b77006f09fb76e0da752658c4d5 (image=quay.io/ceph/grafana:10.4.0, name=pedantic_bhaskara, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:15 np0005590810 podman[97619]: 2026-01-21 16:08:15.044878008 +0000 UTC m=+8.941820300 container attach 1e7714fd64a96ef81fdf0ebd9071c08250169b77006f09fb76e0da752658c4d5 (image=quay.io/ceph/grafana:10.4.0, name=pedantic_bhaskara, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:15 np0005590810 systemd[1]: libpod-1e7714fd64a96ef81fdf0ebd9071c08250169b77006f09fb76e0da752658c4d5.scope: Deactivated successfully.
Jan 21 11:08:15 np0005590810 pedantic_bhaskara[97902]: 472 0
Jan 21 11:08:15 np0005590810 podman[97619]: 2026-01-21 16:08:15.048292442 +0000 UTC m=+8.945234714 container died 1e7714fd64a96ef81fdf0ebd9071c08250169b77006f09fb76e0da752658c4d5 (image=quay.io/ceph/grafana:10.4.0, name=pedantic_bhaskara, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:15 np0005590810 conmon[97902]: conmon 1e7714fd64a96ef81fdf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e7714fd64a96ef81fdf0ebd9071c08250169b77006f09fb76e0da752658c4d5.scope/container/memory.events
Jan 21 11:08:15 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8463ee0021699bf5179318afc08c85a5033e73fbf805af548d213e75d4ca98b2-merged.mount: Deactivated successfully.
Jan 21 11:08:15 np0005590810 podman[97619]: 2026-01-21 16:08:15.088521266 +0000 UTC m=+8.985463548 container remove 1e7714fd64a96ef81fdf0ebd9071c08250169b77006f09fb76e0da752658c4d5 (image=quay.io/ceph/grafana:10.4.0, name=pedantic_bhaskara, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:15 np0005590810 systemd[1]: libpod-conmon-1e7714fd64a96ef81fdf0ebd9071c08250169b77006f09fb76e0da752658c4d5.scope: Deactivated successfully.
Jan 21 11:08:15 np0005590810 podman[97919]: 2026-01-21 16:08:15.167358484 +0000 UTC m=+0.050948654 container create 57377bba6fa8ed5ecfec198f495b9b9390029b6fe458a5e0b568532902d5fe3e (image=quay.io/ceph/grafana:10.4.0, name=cool_fermat, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:15 np0005590810 systemd[1]: Started libpod-conmon-57377bba6fa8ed5ecfec198f495b9b9390029b6fe458a5e0b568532902d5fe3e.scope.
Jan 21 11:08:15 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:15 np0005590810 podman[97919]: 2026-01-21 16:08:15.146425082 +0000 UTC m=+0.030015292 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 21 11:08:15 np0005590810 podman[97919]: 2026-01-21 16:08:15.24485415 +0000 UTC m=+0.128444360 container init 57377bba6fa8ed5ecfec198f495b9b9390029b6fe458a5e0b568532902d5fe3e (image=quay.io/ceph/grafana:10.4.0, name=cool_fermat, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:15 np0005590810 podman[97919]: 2026-01-21 16:08:15.252856005 +0000 UTC m=+0.136446185 container start 57377bba6fa8ed5ecfec198f495b9b9390029b6fe458a5e0b568532902d5fe3e (image=quay.io/ceph/grafana:10.4.0, name=cool_fermat, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:15 np0005590810 cool_fermat[97935]: 472 0
Jan 21 11:08:15 np0005590810 podman[97919]: 2026-01-21 16:08:15.256447036 +0000 UTC m=+0.140037246 container attach 57377bba6fa8ed5ecfec198f495b9b9390029b6fe458a5e0b568532902d5fe3e (image=quay.io/ceph/grafana:10.4.0, name=cool_fermat, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:15 np0005590810 podman[97919]: 2026-01-21 16:08:15.257036554 +0000 UTC m=+0.140626734 container died 57377bba6fa8ed5ecfec198f495b9b9390029b6fe458a5e0b568532902d5fe3e (image=quay.io/ceph/grafana:10.4.0, name=cool_fermat, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:15 np0005590810 systemd[1]: libpod-57377bba6fa8ed5ecfec198f495b9b9390029b6fe458a5e0b568532902d5fe3e.scope: Deactivated successfully.
Jan 21 11:08:15 np0005590810 systemd[1]: var-lib-containers-storage-overlay-f0628a2c77dd4b5b95e24c54e2c1b24039b445ef289a0052898d524301a5d256-merged.mount: Deactivated successfully.
Jan 21 11:08:15 np0005590810 podman[97919]: 2026-01-21 16:08:15.307872463 +0000 UTC m=+0.191462643 container remove 57377bba6fa8ed5ecfec198f495b9b9390029b6fe458a5e0b568532902d5fe3e (image=quay.io/ceph/grafana:10.4.0, name=cool_fermat, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:15 np0005590810 systemd[1]: libpod-conmon-57377bba6fa8ed5ecfec198f495b9b9390029b6fe458a5e0b568532902d5fe3e.scope: Deactivated successfully.
Jan 21 11:08:15 np0005590810 systemd[1]: Reloading.
Jan 21 11:08:15 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:08:15 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:08:15 np0005590810 systemd[1]: Reloading.
Jan 21 11:08:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:15 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:15 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 21 11:08:15 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 21 11:08:15 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:08:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:15 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:15 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:08:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 21 11:08:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 21 11:08:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 21 11:08:15 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 21 11:08:15 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 3523bed5-e34b-4467-b378-f3ff776e7700 (Global Recovery Event) in 5 seconds
Jan 21 11:08:15 np0005590810 systemd[1]: Starting Ceph grafana.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:08:16 np0005590810 podman[98073]: 2026-01-21 16:08:16.20940438 +0000 UTC m=+0.048613103 container create c7b256022c9d0ef0c6be3f0e958a6963d34737af722d182f28ce54bc60120280 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 3 objects/s recovering
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 21 11:08:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0e70969c431154804ce3bc79e6dfdd0ccb46bbd29f334538dfccad838075e1/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0e70969c431154804ce3bc79e6dfdd0ccb46bbd29f334538dfccad838075e1/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0e70969c431154804ce3bc79e6dfdd0ccb46bbd29f334538dfccad838075e1/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0e70969c431154804ce3bc79e6dfdd0ccb46bbd29f334538dfccad838075e1/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0e70969c431154804ce3bc79e6dfdd0ccb46bbd29f334538dfccad838075e1/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:16 np0005590810 podman[98073]: 2026-01-21 16:08:16.186432595 +0000 UTC m=+0.025641318 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 21 11:08:16 np0005590810 podman[98073]: 2026-01-21 16:08:16.29224744 +0000 UTC m=+0.131456183 container init c7b256022c9d0ef0c6be3f0e958a6963d34737af722d182f28ce54bc60120280 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:16 np0005590810 podman[98073]: 2026-01-21 16:08:16.306566729 +0000 UTC m=+0.145775452 container start c7b256022c9d0ef0c6be3f0e958a6963d34737af722d182f28ce54bc60120280 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:16 np0005590810 bash[98073]: c7b256022c9d0ef0c6be3f0e958a6963d34737af722d182f28ce54bc60120280
Jan 21 11:08:16 np0005590810 systemd[1]: Started Ceph grafana.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:16 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:16 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev a731cdda-06df-4cfd-83c9-1fe2b40340e3 (Updating grafana deployment (+1 -> 1))
Jan 21 11:08:16 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event a731cdda-06df-4cfd-83c9-1fe2b40340e3 (Updating grafana deployment (+1 -> 1)) in 11 seconds
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:16 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 208433a7-6027-4d80-8f80-b24caa66bb33 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:16 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.henxfu on compute-0
Jan 21 11:08:16 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.henxfu on compute-0
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.505326464Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-01-21T16:08:16Z
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.505818179Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.50585985Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.505884441Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.505908852Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.505931913Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.505956053Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.505980924Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.506005775Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.506028636Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.506051346Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.506074187Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.506098888Z level=info msg=Target target=[all]
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.506133819Z level=info msg="Path Home" path=/usr/share/grafana
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.50616613Z level=info msg="Path Data" path=/var/lib/grafana
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.506191261Z level=info msg="Path Logs" path=/var/log/grafana
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.506224602Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.506267823Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=settings t=2026-01-21T16:08:16.506293954Z level=info msg="App mode production"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=sqlstore t=2026-01-21T16:08:16.506694766Z level=info msg="Connecting to DB" dbtype=sqlite3
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=sqlstore t=2026-01-21T16:08:16.506753267Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.50748868Z level=info msg="Starting DB migrations"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.508992107Z level=info msg="Executing migration" id="create migration_log table"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.51010923Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.117393ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.513149504Z level=info msg="Executing migration" id="create user table"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.513902517Z level=info msg="Migration successfully executed" id="create user table" duration=753.993µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.515996901Z level=info msg="Executing migration" id="add unique index user.login"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.516764885Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=735.633µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.52020002Z level=info msg="Executing migration" id="add unique index user.email"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.520928602Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=728.492µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.523290605Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.524008086Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=717.471µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.526535224Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.527473373Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=938.299µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.529048282Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.531186397Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.136095ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.534114867Z level=info msg="Executing migration" id="create user table v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.534937632Z level=info msg="Migration successfully executed" id="create user table v2" duration=823.184µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.536839711Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.53748137Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=641.8µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.539375178Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.539980587Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=615.179µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.542382531Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.542771013Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=387.922µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.544601098Z level=info msg="Executing migration" id="Drop old table user_v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.545188097Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=588.189µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.54694293Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.54790906Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=965.99µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.549782417Z level=info msg="Executing migration" id="Update user table charset"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.54987482Z level=info msg="Migration successfully executed" id="Update user table charset" duration=92.813µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.551760577Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.552754108Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=993.941µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.554635536Z level=info msg="Executing migration" id="Add missing user data"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.554862623Z level=info msg="Migration successfully executed" id="Add missing user data" duration=219.096µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.556856145Z level=info msg="Executing migration" id="Add is_disabled column to user"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.557831404Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=975.2µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.559459224Z level=info msg="Executing migration" id="Add index user.login/user.email"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.560105933Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=645.749µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.562197268Z level=info msg="Executing migration" id="Add is_service_account column to user"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.563115386Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=995.33µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.5648541Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.571763661Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.908661ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.574633489Z level=info msg="Executing migration" id="Add uid column to user"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.576115205Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.60453ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.580386976Z level=info msg="Executing migration" id="Update uid column values for users"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.581398637Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=1.040232ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.585350998Z level=info msg="Executing migration" id="Add unique index user_uid"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.586308277Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=958.88µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.589117593Z level=info msg="Executing migration" id="create temp user table v1-7"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.58998612Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=868.607µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.593346874Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.594779987Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.438214ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.597447939Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.59812482Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=676.601µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.600984288Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.601798733Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=815.066µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.605120364Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.606311621Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.196257ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.609514929Z level=info msg="Executing migration" id="Update temp_user table charset"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.609621293Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=110.473µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.612039546Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.613319025Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.284529ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.615112851Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.615789062Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=676.501µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.618543496Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.61932674Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=783.613µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.621056353Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.621762944Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=707.101µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.623859369Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.626640574Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.780965ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.62847836Z level=info msg="Executing migration" id="create temp_user v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.629209803Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=731.353µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.630880994Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.631550195Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=668.961µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.633838235Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.634469864Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=628.879µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.636172616Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.636825416Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=652.95µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.638812727Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.639460587Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=647.52µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.641680585Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.642064907Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=383.442µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.643507972Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.644015466Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=507.964µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.645841583Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.646211045Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=369.542µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.647882915Z level=info msg="Executing migration" id="create star table"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.648503615Z level=info msg="Migration successfully executed" id="create star table" duration=618.76µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.651194317Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.651877148Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=683.261µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.65389744Z level=info msg="Executing migration" id="create org table v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.65454393Z level=info msg="Migration successfully executed" id="create org table v1" duration=646.36µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.656708586Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.657354676Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=646.31µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.660428391Z level=info msg="Executing migration" id="create org_user table v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.661016258Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=586.747µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.664506545Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.665332061Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=824.216µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.667417774Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.668113956Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=695.942µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.670617313Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.672300794Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.683151ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.674578185Z level=info msg="Executing migration" id="Update org table charset"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.674643767Z level=info msg="Migration successfully executed" id="Update org table charset" duration=66.592µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.676908416Z level=info msg="Executing migration" id="Update org_user table charset"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.676971778Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=63.842µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.680129165Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.680575898Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=451.723µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.68290216Z level=info msg="Executing migration" id="create dashboard table"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.684391465Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.489525ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.687504911Z level=info msg="Executing migration" id="add index dashboard.account_id"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.688392798Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=888.467µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.691509244Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.69239963Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=904.787µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.695188836Z level=info msg="Executing migration" id="create dashboard_tag table"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.696544798Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.356162ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.699375735Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.700121017Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=745.462µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.702637775Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.703414888Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=777.363µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.706266276Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.710816426Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.54817ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.713485108Z level=info msg="Executing migration" id="create dashboard v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.714260311Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=775.904µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.7164816Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.717119378Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=638.039µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.720790141Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.721534284Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=745.143µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.7246495Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.72498562Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=333.32µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.727063374Z level=info msg="Executing migration" id="drop table dashboard_v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.728615331Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.554907ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.730561651Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.730628474Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=67.442µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.732615004Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.734185812Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.570478ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.736040719Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.737513264Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.472505ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.73935205Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.740836496Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.483626ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.743765656Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.745278392Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.526967ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.747244763Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.749176992Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.930619ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.751108961Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.752010719Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=902.898µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.754064532Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.754845866Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=781.575µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.757442825Z level=info msg="Executing migration" id="Update dashboard table charset"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.757471956Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=29.651µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.760389766Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.760416377Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=28.411µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.762852361Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.764866403Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.013321ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.766873495Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.768394591Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.520696ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.770414233Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.772714123Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.29986ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.77456043Z level=info msg="Executing migration" id="Add column uid in dashboard"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.776764988Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.203848ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.778501451Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.778740608Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=239.047µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.780502583Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.781436381Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=935.268µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.783605788Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.784585157Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=979.769µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.788300662Z level=info msg="Executing migration" id="Update dashboard title length"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.788344333Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=46.121µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.790027374Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.79085993Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=832.056µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.793271504Z level=info msg="Executing migration" id="create dashboard_provisioning"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.793906173Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=634.309µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.796118781Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.800688652Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.566581ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.802974812Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.80357808Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=600.027µs
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.80685245Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.80748735Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=634.77µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.809869303Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.811117472Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.247158ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.813809984Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.814201216Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=393.612µs
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.815789884Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.816452125Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=661.921µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.818440446Z level=info msg="Executing migration" id="Add check_sum column"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.819989354Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.548408ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.823859812Z level=info msg="Executing migration" id="Add index for dashboard_title"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.824526302Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=666.14µs
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.826178713Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.826343278Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=163.825µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.82870443Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.828842034Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=139.684µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.830828396Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Jan 21 11:08:16 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.831801175Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=972.229µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.834743925Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.836310254Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.566479ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.838184961Z level=info msg="Executing migration" id="create data_source table"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.838940955Z level=info msg="Migration successfully executed" id="create data_source table" duration=755.454µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.841560025Z level=info msg="Executing migration" id="add index data_source.account_id"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.84236908Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=807.785µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.844583628Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.845385403Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=801.094µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.847354553Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.848002763Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=648.4µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.849607682Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.850269042Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=661.221µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.851880221Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.856213625Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.332223ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.858122073Z level=info msg="Executing migration" id="create data_source table v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.858868976Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=748.903µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.860555287Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.861201607Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=646.24µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.862740324Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.863392064Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=651.39µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.865525809Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.866051836Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=524.257µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.867670216Z level=info msg="Executing migration" id="Add column with_credentials"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.869408869Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.734803ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.87108232Z level=info msg="Executing migration" id="Add secure json data column"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.873002979Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.920869ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.874927718Z level=info msg="Executing migration" id="Update data_source table charset"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.874964229Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=38.591µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.876757274Z level=info msg="Executing migration" id="Update initial version to 1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.876928779Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=171.625µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.879353173Z level=info msg="Executing migration" id="Add read_only data column"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.881001235Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.647461ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.882686976Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.88283804Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=150.924µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.884480401Z level=info msg="Executing migration" id="Update json_data with nulls"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.884623586Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=143.145µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.88738465Z level=info msg="Executing migration" id="Add uid column"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.889041451Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.656431ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.890757433Z level=info msg="Executing migration" id="Update uid value"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.890912748Z level=info msg="Migration successfully executed" id="Update uid value" duration=154.505µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.89257661Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.893222699Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=644.739µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.894833609Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.895457938Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=623.469µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.897455519Z level=info msg="Executing migration" id="create api_key table"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.89814702Z level=info msg="Migration successfully executed" id="create api_key table" duration=691.311µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.900507223Z level=info msg="Executing migration" id="add index api_key.account_id"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.901261446Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=751.023µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.903592017Z level=info msg="Executing migration" id="add index api_key.key"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.9043442Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=748.523µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.906525397Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.907171466Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=645.629µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.909268371Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.90989932Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=631.079µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.911424477Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.912068427Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=644.5µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.913441409Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.914105239Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=663.72µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.916021408Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.922848718Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.82663ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.924690204Z level=info msg="Executing migration" id="create api_key table v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.925576331Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=885.987µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.927218581Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.92814289Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=923.919µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.930025547Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.930953126Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=928.149µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.932501513Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.933526525Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.024772ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.936737204Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.937159217Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=421.843µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.939117266Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.939849179Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=730.133µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.941937743Z level=info msg="Executing migration" id="Update api_key table charset"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.941965204Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=27.881µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.943736408Z level=info msg="Executing migration" id="Add expires to api_key table"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.94675478Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.017232ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.948972249Z level=info msg="Executing migration" id="Add service account foreign key"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.95196295Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.990141ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.95389053Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.954084786Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=194.596µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.955895401Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.959275255Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.379824ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.961936206Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.964882997Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.948321ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.96661796Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.967520328Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=902.208µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.969110626Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.969845259Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=734.333µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.971703875Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.972653485Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=949.3µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.974387358Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.975333227Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=945.359µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.977880195Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.978723531Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=843.736µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.98065163Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.982670843Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=2.021213ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.984699095Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.984755926Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=57.071µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.986300603Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.986324174Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=24.101µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.988200472Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.990222784Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.022102ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.991761961Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.993836355Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.073864ms
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.996179877Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.996255859Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=77.972µs
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.997557268Z level=info msg="Executing migration" id="create quota table v1"
Jan 21 11:08:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:16.998127386Z level=info msg="Migration successfully executed" id="create quota table v1" duration=569.208µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.000471148Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.001076257Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=604.669µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.00282258Z level=info msg="Executing migration" id="Update quota table charset"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.002848901Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=27.241µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.00444682Z level=info msg="Executing migration" id="create plugin_setting table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.005043219Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=596.379µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.006857424Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.007503754Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=646.82µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.009424303Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.011456935Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.032342ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.013001393Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.013024413Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=23.231µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.014583451Z level=info msg="Executing migration" id="create session table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.015291763Z level=info msg="Migration successfully executed" id="create session table" duration=706.241µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.017391497Z level=info msg="Executing migration" id="Drop old table playlist table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.017468369Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=76.742µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.019505831Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.019577554Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=72.302µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.021492063Z level=info msg="Executing migration" id="create playlist table v2"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.022111641Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=617.158µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.024294289Z level=info msg="Executing migration" id="create playlist item table v2"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.024845045Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=551.606µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.026728303Z level=info msg="Executing migration" id="Update playlist table charset"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.026751984Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=24.031µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.028193228Z level=info msg="Executing migration" id="Update playlist_item table charset"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.028215778Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=22.97µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.029583671Z level=info msg="Executing migration" id="Add playlist column created_at"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.031729477Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.147295ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.033372087Z level=info msg="Executing migration" id="Add playlist column updated_at"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.035545224Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.172358ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.037260786Z level=info msg="Executing migration" id="drop preferences table v2"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.037334458Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=73.702µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.039050511Z level=info msg="Executing migration" id="drop preferences table v3"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.039130973Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=80.722µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.041567618Z level=info msg="Executing migration" id="create preferences table v3"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.042264719Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=697.741µs
Jan 21 11:08:17 np0005590810 podman[98202]: 2026-01-21 16:08:17.042822417 +0000 UTC m=+0.046151877 container create 9382ef97e82a36f3cfb2828ce11ce2f375e80080c4eac2116eba04d89ee520bb (image=quay.io/ceph/haproxy:2.3, name=magical_sinoussi)
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.0448962Z level=info msg="Executing migration" id="Update preferences table charset"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.044918391Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=23.141µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.04717944Z level=info msg="Executing migration" id="Add column team_id in preferences"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.049417419Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.239299ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.054317159Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.054512905Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=198.166µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.057369483Z level=info msg="Executing migration" id="Add column week_start in preferences"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.059673993Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.30411ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.061146278Z level=info msg="Executing migration" id="Add column preferences.json_data"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.063377947Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.231319ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.064956895Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.065011137Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=57.152µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.068480134Z level=info msg="Executing migration" id="Add preferences index org_id"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.069306009Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=825.755µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.071717813Z level=info msg="Executing migration" id="Add preferences index user_id"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.072688222Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=968.709µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.076026895Z level=info msg="Executing migration" id="create alert table v1"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.077036246Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.008871ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.079616505Z level=info msg="Executing migration" id="add index alert org_id & id "
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.080511913Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=896.848µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.083613377Z level=info msg="Executing migration" id="add index alert state"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.084271088Z level=info msg="Migration successfully executed" id="add index alert state" duration=656.96µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.086339411Z level=info msg="Executing migration" id="add index alert dashboard_id"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.087000301Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=660.43µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.089028114Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.089592031Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=564.357µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.091340435Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.092056716Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=716.091µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.094016747Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.094813991Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=795.413µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.096387599Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Jan 21 11:08:17 np0005590810 systemd[1]: Started libpod-conmon-9382ef97e82a36f3cfb2828ce11ce2f375e80080c4eac2116eba04d89ee520bb.scope.
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.103672072Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.284123ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.106008684Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.106809289Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=802.315µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.108554623Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.109331066Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=776.553µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.112214414Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.112943437Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=733.323µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.115222257Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.1166121Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.391053ms
Jan 21 11:08:17 np0005590810 podman[98202]: 2026-01-21 16:08:17.023105352 +0000 UTC m=+0.026434842 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.119359484Z level=info msg="Executing migration" id="create alert_notification table v1"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.120639973Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.284369ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.122883351Z level=info msg="Executing migration" id="Add column is_default"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.125828072Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.944301ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.12838472Z level=info msg="Executing migration" id="Add column frequency"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/160817 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.133491057Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.105917ms
Jan 21 11:08:17 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.135979214Z level=info msg="Executing migration" id="Add column send_reminder"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.140482501Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.503017ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.142499623Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.147028422Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.526029ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.148951481Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.150451028Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.503227ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.153396308Z level=info msg="Executing migration" id="Update alert table charset"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.153515701Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=119.873µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.156690278Z level=info msg="Executing migration" id="Update alert_notification table charset"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.156832903Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=143.905µs
Jan 21 11:08:17 np0005590810 podman[98202]: 2026-01-21 16:08:17.159581037 +0000 UTC m=+0.162910547 container init 9382ef97e82a36f3cfb2828ce11ce2f375e80080c4eac2116eba04d89ee520bb (image=quay.io/ceph/haproxy:2.3, name=magical_sinoussi)
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.160855296Z level=info msg="Executing migration" id="create notification_journal table v1"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.161892018Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.036362ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.165155098Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.166358505Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.206447ms
Jan 21 11:08:17 np0005590810 podman[98202]: 2026-01-21 16:08:17.17010542 +0000 UTC m=+0.173434890 container start 9382ef97e82a36f3cfb2828ce11ce2f375e80080c4eac2116eba04d89ee520bb (image=quay.io/ceph/haproxy:2.3, name=magical_sinoussi)
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.170615335Z level=info msg="Executing migration" id="drop alert_notification_journal"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.171706899Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.091654ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.173973168Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Jan 21 11:08:17 np0005590810 podman[98202]: 2026-01-21 16:08:17.173986839 +0000 UTC m=+0.177316329 container attach 9382ef97e82a36f3cfb2828ce11ce2f375e80080c4eac2116eba04d89ee520bb (image=quay.io/ceph/haproxy:2.3, name=magical_sinoussi)
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.175029701Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.053053ms
Jan 21 11:08:17 np0005590810 magical_sinoussi[98218]: 0 0
Jan 21 11:08:17 np0005590810 systemd[1]: libpod-9382ef97e82a36f3cfb2828ce11ce2f375e80080c4eac2116eba04d89ee520bb.scope: Deactivated successfully.
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.178526059Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Jan 21 11:08:17 np0005590810 podman[98202]: 2026-01-21 16:08:17.179510748 +0000 UTC m=+0.182840208 container died 9382ef97e82a36f3cfb2828ce11ce2f375e80080c4eac2116eba04d89ee520bb (image=quay.io/ceph/haproxy:2.3, name=magical_sinoussi)
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.180503479Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.97773ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.183076658Z level=info msg="Executing migration" id="Add for to alert table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.186206224Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.128166ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.189590018Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.192575589Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.981931ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.194482448Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.194657223Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=175.755µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.196831609Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.197713497Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=878.688µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.200393989Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.201185223Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=791.403µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.203538016Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.206378763Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.838807ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.20858468Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.208636212Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=54.092µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.211657464Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.21250431Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=848.276µs
Jan 21 11:08:17 np0005590810 systemd[1]: var-lib-containers-storage-overlay-4a0364868d8b255524c8167849d333b69f29b6cb9b17e3e5889bdc1a526e492a-merged.mount: Deactivated successfully.
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.214316916Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.215063718Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=744.212µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.220413593Z level=info msg="Executing migration" id="Drop old annotation table v4"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.220573288Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=163.275µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.223671942Z level=info msg="Executing migration" id="create annotation table v5"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.224591251Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=923.269µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.228539602Z level=info msg="Executing migration" id="add index annotation 0 v3"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.229315855Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=772.953µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.233221785Z level=info msg="Executing migration" id="add index annotation 1 v3"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.23433704Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.115645ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.237848207Z level=info msg="Executing migration" id="add index annotation 2 v3"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.239186589Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.335201ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.241719946Z level=info msg="Executing migration" id="add index annotation 3 v3"
Jan 21 11:08:17 np0005590810 podman[98202]: 2026-01-21 16:08:17.242039956 +0000 UTC m=+0.245369416 container remove 9382ef97e82a36f3cfb2828ce11ce2f375e80080c4eac2116eba04d89ee520bb (image=quay.io/ceph/haproxy:2.3, name=magical_sinoussi)
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.242870562Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.150256ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.245054398Z level=info msg="Executing migration" id="add index annotation 4 v3"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.245885443Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=830.745µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.249414732Z level=info msg="Executing migration" id="Update annotation table charset"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.249436412Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=22.61µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.251009511Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.254156508Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.145757ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.256643614Z level=info msg="Executing migration" id="Drop category_id index"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.25751465Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=871.136µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.260997627Z level=info msg="Executing migration" id="Add column tags to annotation table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.265132964Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.128437ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.268369373Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.269089655Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=720.962µs
Jan 21 11:08:17 np0005590810 systemd[1]: libpod-conmon-9382ef97e82a36f3cfb2828ce11ce2f375e80080c4eac2116eba04d89ee520bb.scope: Deactivated successfully.
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.271779558Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.272656825Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=876.917µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.278178974Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.279094713Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=917.989µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.283113895Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.292316438Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=9.198283ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.294520475Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.295253748Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=734.163µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.298676703Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.299782776Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.108883ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.30282309Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.303121399Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=299.409µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.30545304Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.306021948Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=568.858µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.307972057Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.308131542Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=159.215µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.313858858Z level=info msg="Executing migration" id="Add created time to annotation table"
Jan 21 11:08:17 np0005590810 systemd[1]: Reloading.
Jan 21 11:08:17 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 108 pg[10.12( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=108 pruub=10.112974167s) [2] r=-1 lpr=108 pi=[67,108)/1 crt=56'1081 mlcod 0'0 active pruub 225.761322021s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:08:17 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 108 pg[10.12( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=108 pruub=10.112935066s) [2] r=-1 lpr=108 pi=[67,108)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 225.761322021s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.322811213Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=8.948974ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.324930198Z level=info msg="Executing migration" id="Add updated time to annotation table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.328158347Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.22705ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.32990315Z level=info msg="Executing migration" id="Add index for created in annotation table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.330634712Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=734.932µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.334205102Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.336491282Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=2.28687ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.343194578Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.343619191Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=432.313µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.345900071Z level=info msg="Executing migration" id="Add epoch_end column"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.350295655Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.394594ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.352758161Z level=info msg="Executing migration" id="Add index for epoch_end"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.353646958Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=889.667µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.356823096Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.357017992Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=196.536µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.358659242Z level=info msg="Executing migration" id="Move region to single row"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.358960741Z level=info msg="Migration successfully executed" id="Move region to single row" duration=305.149µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.360730705Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.361637913Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=907.998µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.367340968Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.368642309Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.30482ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.385915267Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.387136625Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.225758ms
Jan 21 11:08:17 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:08:17 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.431190716Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.433158987Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.971911ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.612796235Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.614038984Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.246388ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.641072692Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.642339422Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.272539ms
Jan 21 11:08:17 np0005590810 systemd[1]: Reloading.
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:17 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:17 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:08:17 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.748120125Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.749027713Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=912.978µs
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:17 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.857056696Z level=info msg="Executing migration" id="create test_data table"
Jan 21 11:08:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.85880269Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.752715ms
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.916647183Z level=info msg="Executing migration" id="create dashboard_version table v1"
Jan 21 11:08:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:17.918525631Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.883488ms
Jan 21 11:08:17 np0005590810 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.henxfu for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:08:18 np0005590810 ceph-mon[74380]: Deploying daemon haproxy.rgw.default.compute-0.henxfu on compute-0
Jan 21 11:08:18 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.078375133Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.079793267Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.422035ms
Jan 21 11:08:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 346 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.291342564Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Jan 21 11:08:18 np0005590810 podman[98366]: 2026-01-21 16:08:18.292595913 +0000 UTC m=+0.111598015 container create b976f8b136eaedf5f797273e9c777a072768374a897a247dc57a476110260c4d (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-rgw-default-compute-0-henxfu)
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.292801709Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.463536ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.29613623Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.296374268Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=238.708µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.300473574Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.301020711Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=557.387µs
Jan 21 11:08:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 21 11:08:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.304087264Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.304174987Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=89.563µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.3062223Z level=info msg="Executing migration" id="create team table"
Jan 21 11:08:18 np0005590810 podman[98366]: 2026-01-21 16:08:18.212393932 +0000 UTC m=+0.031396064 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.30721138Z level=info msg="Migration successfully executed" id="create team table" duration=989.56µs
Jan 21 11:08:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.311358078Z level=info msg="Executing migration" id="add index team.org_id"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.31240564Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.048142ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.315543235Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Jan 21 11:08:18 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.316926208Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.386983ms
Jan 21 11:08:18 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 109 pg[10.12( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=109) [2]/[0] r=0 lpr=109 pi=[67,109)/1 crt=56'1081 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:08:18 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 109 pg[10.12( v 56'1081 (0'0,56'1081] local-lis/les=67/68 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=109) [2]/[0] r=0 lpr=109 pi=[67,109)/1 crt=56'1081 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 11:08:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797fbd46ce29f9595257d4f750d72bbaac52d1ac69819f992d9d863d3d134457/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.385169551Z level=info msg="Executing migration" id="Add column uid in team"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.389668929Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.500178ms
Jan 21 11:08:18 np0005590810 podman[98366]: 2026-01-21 16:08:18.397053525 +0000 UTC m=+0.216055697 container init b976f8b136eaedf5f797273e9c777a072768374a897a247dc57a476110260c4d (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-rgw-default-compute-0-henxfu)
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.397876741Z level=info msg="Executing migration" id="Update uid column values in team"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.398284823Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=413.193µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.400676287Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.401652357Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=975.93µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:18 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:18 np0005590810 podman[98366]: 2026-01-21 16:08:18.403173733 +0000 UTC m=+0.222175845 container start b976f8b136eaedf5f797273e9c777a072768374a897a247dc57a476110260c4d (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-rgw-default-compute-0-henxfu)
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.405135693Z level=info msg="Executing migration" id="create team member table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.406704592Z level=info msg="Migration successfully executed" id="create team member table" duration=1.574208ms
Jan 21 11:08:18 np0005590810 bash[98366]: b976f8b136eaedf5f797273e9c777a072768374a897a247dc57a476110260c4d
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.409985182Z level=info msg="Executing migration" id="add index team_member.org_id"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.411158578Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.176356ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.413884251Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Jan 21 11:08:18 np0005590810 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.henxfu for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.414864452Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=976.62µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-rgw-default-compute-0-henxfu[98381]: [NOTICE] 020/160818 (2) : New worker #1 (4) forked
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.418745441Z level=info msg="Executing migration" id="add index team_member.team_id"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.420721011Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.98113ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.424985712Z level=info msg="Executing migration" id="Add column email to team table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.429780229Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.794357ms
Jan 21 11:08:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.43271728Z level=info msg="Executing migration" id="Add column external to team_member table"
Jan 21 11:08:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.003000094s ======
Jan 21 11:08:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:18.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000094s
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.438435195Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.677295ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.441740746Z level=info msg="Executing migration" id="Add column permission to team_member table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.4454512Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.711265ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.447457971Z level=info msg="Executing migration" id="create dashboard acl table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.448357909Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=899.998µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.451770734Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.452599138Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=828.354µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.455020843Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.45591324Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=893.637µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.459626774Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.460478791Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=849.427µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.462715579Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.463469713Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=755.814µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.465865316Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.46668621Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=820.644µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.468833577Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.471406126Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=2.565569ms
Jan 21 11:08:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.640215662Z level=info msg="Executing migration" id="add index dashboard_permission"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.641620756Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.409693ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.656896443Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.657656457Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=792.535µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.65970416Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.66003302Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=326.29µs
Jan 21 11:08:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.662162786Z level=info msg="Executing migration" id="create tag table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.663166406Z level=info msg="Migration successfully executed" id="create tag table" duration=1.00689ms
Jan 21 11:08:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.667806219Z level=info msg="Executing migration" id="add index tag.key_value"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.670470771Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=2.420384ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.674003138Z level=info msg="Executing migration" id="create login attempt table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.675851666Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.848167ms
Jan 21 11:08:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.679217648Z level=info msg="Executing migration" id="add index login_attempt.username"
Jan 21 11:08:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.681076216Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.858077ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.684555123Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.68644424Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.891978ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.689321968Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Jan 21 11:08:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:18 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.unukye on compute-2
Jan 21 11:08:18 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.unukye on compute-2
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.712991134Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=23.662285ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.785024724Z level=info msg="Executing migration" id="create login_attempt v2"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.786819288Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.796055ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.79076523Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.792915175Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=2.145486ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.796678271Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.797398542Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=721.211µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.801567081Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.802599712Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.033022ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.805095389Z level=info msg="Executing migration" id="create user auth table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.805955665Z level=info msg="Migration successfully executed" id="create user auth table" duration=861.106µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.808451471Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.810371271Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.92086ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.837803472Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.838116471Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=319.58µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.841108743Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.847500519Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=6.388495ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.849713157Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.855270758Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.55616ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.857880208Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.862922272Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.047705ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.865388237Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.869277937Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.89001ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.871153914Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.872094594Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=942.05µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.874504027Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.879348466Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=4.843178ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.881514732Z level=info msg="Executing migration" id="create server_lock table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.882639167Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.122594ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.885503395Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.886537666Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.039112ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.890302521Z level=info msg="Executing migration" id="create user auth token table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.891470548Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.169507ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.89448331Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.895818511Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.338ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.898378239Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.899585086Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.206557ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.902482085Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.903766084Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.283579ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.906387174Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.912663387Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.274093ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.914835353Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.915824554Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=989.221µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.919505297Z level=info msg="Executing migration" id="create cache_data table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.920623801Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.118174ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.923091497Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.924291424Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.199527ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.926557193Z level=info msg="Executing migration" id="create short_url table v1"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.927692318Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.135425ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.930118453Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.931328499Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.211086ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.935186358Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.935309902Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=124.774µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.937584531Z level=info msg="Executing migration" id="delete alert_definition table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.937693855Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=109.084µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.939524291Z level=info msg="Executing migration" id="recreate alert_definition table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.940726138Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.202067ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.943494093Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.944611087Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.117935ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.947853177Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.948891418Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.038811ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.951634492Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.951725135Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=91.463µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.954769038Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.95611154Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.344392ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.958227065Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.959379949Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.157325ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.961691521Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.962883457Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.191866ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.967545111Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.968863381Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.32002ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.972192123Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.978506437Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.311294ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.981393875Z level=info msg="Executing migration" id="drop alert_definition table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.98285524Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.463915ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.985197041Z level=info msg="Executing migration" id="delete alert_definition_version table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.985377387Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=183.736µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.98742871Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.988342448Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=914.329µs
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.990736371Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.991745052Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.008761ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.994522928Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.996213079Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.696212ms
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.998133059Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Jan 21 11:08:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:18.99818595Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=53.291µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.000121659Z level=info msg="Executing migration" id="drop alert_definition_version table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.001087709Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=966.21µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.003347998Z level=info msg="Executing migration" id="create alert_instance table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.004453722Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.105094ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.006525846Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.007628549Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.104643ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.009892939Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Jan 21 11:08:19 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 21 11:08:19 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:19 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:19 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:19 np0005590810 ceph-mon[74380]: Deploying daemon haproxy.rgw.default.compute-2.unukye on compute-2
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.010798187Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=903.158µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.01351347Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.017867714Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.351384ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.019899146Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.020851104Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=951.018µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.0226369Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.023380913Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=743.643µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.025535699Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.050560176Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.016767ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.052801265Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.074083237Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=21.249731ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.081140584Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.082454504Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.31523ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.112049502Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.113337711Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.29098ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.153342278Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.158214007Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.875499ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.167551894Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.172186045Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.631842ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.174220878Z level=info msg="Executing migration" id="create alert_rule table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.175122885Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=901.547µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.179354966Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.180209352Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=853.786µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.183166932Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.184002088Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=834.446µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.186619338Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.188202307Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.587449ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.194596112Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.194725756Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=134.814µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.196870123Z level=info msg="Executing migration" id="add column for to alert_rule"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.202074742Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=5.202289ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.204170717Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.208945803Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.774216ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.211532602Z level=info msg="Executing migration" id="add column labels to alert_rule"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.216435023Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.900141ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.227599415Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.22875922Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.161805ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.230827243Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.231714341Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=887.448µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.233695821Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.238679925Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.978494ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.241521392Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.24602253Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.498278ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.24832052Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.24929104Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=972.12µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.251923741Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.256319955Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.394914ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.258616707Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.263096314Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.477707ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.265151466Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.265203568Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=53.962µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.267339993Z level=info msg="Executing migration" id="create alert_rule_version table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.268508379Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.169516ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.271283085Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.272292656Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.008971ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.274415391Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.275377831Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=966.34µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.277748123Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.277800855Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=50.392µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.281049375Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.286994686Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.954772ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.28972914Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.294894169Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.157679ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.299286683Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.304686899Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=5.398316ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.307132214Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Jan 21 11:08:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.313156659Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.016305ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.316330956Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Jan 21 11:08:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 21 11:08:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.321892297Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.55524ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.324043772Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.324103234Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=61.702µs
Jan 21 11:08:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.327742286Z level=info msg="Executing migration" id=create_alert_configuration_table
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.329038585Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.295109ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.333825113Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Jan 21 11:08:19 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 110 pg[10.13( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=110) [0] r=0 lpr=110 pi=[65,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.340687403Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.854129ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.343824929Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.344089287Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=268.138µs
Jan 21 11:08:19 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 110 pg[10.12( v 56'1081 (0'0,56'1081] local-lis/les=109/110 n=4 ec=58/49 lis/c=67/67 les/c/f=68/68/0 sis=109) [2]/[0] async=[2] r=0 lpr=109 pi=[67,109)/1 crt=56'1081 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.347662977Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.352794764Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=5.128787ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.355793276Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.357076875Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.289649ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.360339805Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.366130273Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=5.784198ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.369321151Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.37056413Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.244348ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.375575043Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.377096059Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.526536ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.387434617Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.392967497Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=5.533059ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.396804944Z level=info msg="Executing migration" id="create provenance_type table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.398033282Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.225077ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.40192065Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.403107427Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.187287ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.406292695Z level=info msg="Executing migration" id="create alert_image table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.407991297Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.703382ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.412511306Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.413587848Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.076392ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.417296882Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.417369995Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=74.283µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.421843471Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.422956196Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.112565ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.426945068Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.428611189Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.664851ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.430992342Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.431473127Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.434632484Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.435196181Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=565.047µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.437047458Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.438162652Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.114414ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.440099512Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.446314052Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.213479ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.449870001Z level=info msg="Executing migration" id="create library_element table v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.450958745Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.088844ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.454422531Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.456406812Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.995541ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.458918169Z level=info msg="Executing migration" id="create library_element_connection table v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.459962321Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.044453ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.46221831Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.463401506Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.184636ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.465994215Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.467061058Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.063253ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.47004832Z level=info msg="Executing migration" id="increase max description length to 2048"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.470074641Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=27.461µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.472155954Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.472258567Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=99.513µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.474375312Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.474834986Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=460.514µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.477840259Z level=info msg="Executing migration" id="create data_keys table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.479309754Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.466865ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.483289476Z level=info msg="Executing migration" id="create secrets table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.484184163Z level=info msg="Migration successfully executed" id="create secrets table" duration=895.167µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.487130034Z level=info msg="Executing migration" id="rename data_keys name column to id"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.513973687Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=26.837312ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.51668928Z level=info msg="Executing migration" id="add name column into data_keys"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.522017913Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.330453ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.523927982Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.524129619Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=201.016µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.526222753Z level=info msg="Executing migration" id="rename data_keys name column to label"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.554002074Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=27.769261ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.556426869Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.585660255Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.224997ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.588040968Z level=info msg="Executing migration" id="create kv_store table v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.588966707Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=925.949µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.593597409Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.594826736Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.230617ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.597106587Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.597468448Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=361.242µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.600987255Z level=info msg="Executing migration" id="create permission table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.602058088Z level=info msg="Migration successfully executed" id="create permission table" duration=1.071743ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.60504959Z level=info msg="Executing migration" id="add unique index permission.role_id"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.606097892Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.049221ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.609447354Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.610552239Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.105215ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.613310993Z level=info msg="Executing migration" id="create role table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.614298863Z level=info msg="Migration successfully executed" id="create role table" duration=987.64µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.617817721Z level=info msg="Executing migration" id="add column display_name"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.623421973Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.605352ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.625191768Z level=info msg="Executing migration" id="add column group_name"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.630716147Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.519659ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.633617806Z level=info msg="Executing migration" id="add index role.org_id"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.635155203Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.538307ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.637924448Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.639314341Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.390503ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.64190425Z level=info msg="Executing migration" id="add index role_org_id_uid"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.643522Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.624759ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.646622315Z level=info msg="Executing migration" id="create team role table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.64745298Z level=info msg="Migration successfully executed" id="create team role table" duration=830.465µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.652797374Z level=info msg="Executing migration" id="add index team_role.org_id"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.653810115Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.012501ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.655816377Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.656813217Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=996.42µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.65883579Z level=info msg="Executing migration" id="add index team_role.team_id"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.659737307Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=901.136µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.66212293Z level=info msg="Executing migration" id="create user role table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.662964086Z level=info msg="Migration successfully executed" id="create user role table" duration=841.176µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.664976967Z level=info msg="Executing migration" id="add index user_role.org_id"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.665923747Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=947.739µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.667955769Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.668866337Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=910.389µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.670743364Z level=info msg="Executing migration" id="add index user_role.user_id"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.671637432Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=894.358µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.673705355Z level=info msg="Executing migration" id="create builtin role table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.674554151Z level=info msg="Migration successfully executed" id="create builtin role table" duration=849.396µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.677653966Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.678615156Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=955.77µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.680862775Z level=info msg="Executing migration" id="add index builtin_role.name"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.681921287Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.058882ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.685084964Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.694355848Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.262844ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.696786573Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.697797474Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.011231ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.700432065Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.701322372Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=890.087µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.703460068Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.704379005Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=919.247µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.707966516Z level=info msg="Executing migration" id="add unique index role.uid"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.708830843Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=864.837µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.710918966Z level=info msg="Executing migration" id="create seed assignment table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.711611818Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=692.622µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.713950549Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.714812706Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=862.137µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.717142677Z level=info msg="Executing migration" id="add column hidden to role table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:19 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.723119681Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=5.976964ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.725438372Z level=info msg="Executing migration" id="permission kind migration"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.731279311Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.842939ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.732703084Z level=info msg="Executing migration" id="permission attribute migration"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.739005578Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.286583ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.741733021Z level=info msg="Executing migration" id="permission identifier migration"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.748335734Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=6.602493ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.750420828Z level=info msg="Executing migration" id="add permission identifier index"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.751442Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.020811ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.75343972Z level=info msg="Executing migration" id="add permission action scope role_id index"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.754538874Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.097003ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.757490284Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.758489585Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=998.851µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.76093447Z level=info msg="Executing migration" id="create query_history table v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.761834348Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=899.208µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.763674234Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.764707436Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.033092ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.767053018Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.767251024Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=198.696µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.769013008Z level=info msg="Executing migration" id="rbac disabled migrator"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.769055739Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=43.741µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.771098221Z level=info msg="Executing migration" id="teams permissions migration"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.771639758Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=541.397µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.773682271Z level=info msg="Executing migration" id="dashboard permissions"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.774254568Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=572.647µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.775983712Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.776659432Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=675.94µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.779731466Z level=info msg="Executing migration" id="drop managed folder create actions"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.780001915Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=270.409µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.782322996Z level=info msg="Executing migration" id="alerting notification permissions"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.782870493Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=546.517µs
Jan 21 11:08:19 np0005590810 kernel: ganesha.nfsd[97837]: segfault at 50 ip 00007f922847032e sp 00007f91a5ffa210 error 4 in libntirpc.so.5.8[7f9228455000+2c000] likely on CPU 3 (core 0, socket 3)
Jan 21 11:08:19 np0005590810 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[96591]: 21/01/2026 16:08:19 : epoch 6970f9dd : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180000fa0 fd 38 proxy ignored for local
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.786101112Z level=info msg="Executing migration" id="create query_history_star table v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.787289919Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.193848ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.78995491Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.791159057Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.204417ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.793530419Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.803996801Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=10.458322ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.806418175Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.806539149Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=92.693µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.809000994Z level=info msg="Executing migration" id="create correlation table v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.811066917Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.065263ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.814880815Z level=info msg="Executing migration" id="add index correlations.uid"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.816195284Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.31427ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.81867749Z level=info msg="Executing migration" id="add index correlations.source_uid"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.81996463Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.287ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.822423966Z level=info msg="Executing migration" id="add correlation config column"
Jan 21 11:08:19 np0005590810 systemd[1]: Started Process Core Dump (PID 98396/UID 0).
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.832137534Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.707139ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.83461845Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.836391154Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.772244ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.838928352Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.840377517Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.449076ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.842707448Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.868696875Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=25.983357ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.870941924Z level=info msg="Executing migration" id="create correlation v2"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.872662486Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.720892ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.875180443Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.876604117Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.427984ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.878692551Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.880025843Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.332922ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.883503578Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.88483964Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.336232ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.887365847Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.887714578Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=348.331µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.890406651Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.891519964Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.113093ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.893512426Z level=info msg="Executing migration" id="add provisioning column"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.902212523Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.699577ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.904982928Z level=info msg="Executing migration" id="create entity_events table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.906307648Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.244867ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.908378331Z level=info msg="Executing migration" id="create dashboard public config v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.909793885Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.415004ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.912457197Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.912993413Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.915150689Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.915718057Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.917916164Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.918881994Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=964.96µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.920981138Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.921895387Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=914.358µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.924083783Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.925078693Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=993.86µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.92753973Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.928606432Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.065352ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.932106339Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.933314987Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.209858ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.935139112Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.936078191Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=939.149µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.939023131Z level=info msg="Executing migration" id="Drop public config table"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.939862717Z level=info msg="Migration successfully executed" id="Drop public config table" duration=838.986µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.941784297Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.942866019Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.081743ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.944802998Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.945735137Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=932.199µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.947611744Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.948634616Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.022252ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.950652588Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.951612077Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=959.749µs
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.954783835Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.980844104Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=26.032738ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.983699952Z level=info msg="Executing migration" id="add annotations_enabled column"
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.991184041Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.489539ms
Jan 21 11:08:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.99342349Z level=info msg="Executing migration" id="add time_selection_enabled column"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:19.999876948Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.452429ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.001855228Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.002128987Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=270.358µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.003677494Z level=info msg="Executing migration" id="add share column"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.010631667Z level=info msg="Migration successfully executed" id="add share column" duration=6.948893ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.012982179Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.013294959Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=313.7µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.015211017Z level=info msg="Executing migration" id="create file table"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.01625775Z level=info msg="Migration successfully executed" id="create file table" duration=1.045713ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.018507969Z level=info msg="Executing migration" id="file table idx: path natural pk"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.019598123Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.090503ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.021778639Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.023274665Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.500156ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.025449152Z level=info msg="Executing migration" id="create file_meta table"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.026296887Z level=info msg="Migration successfully executed" id="create file_meta table" duration=847.915µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.028432644Z level=info msg="Executing migration" id="file table idx: path key"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.029483665Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.051541ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.031994212Z level=info msg="Executing migration" id="set path collation in file table"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.032081985Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=88.843µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.033706685Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.033785378Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=79.453µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.035657845Z level=info msg="Executing migration" id="managed permissions migration"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.036212562Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=555.007µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.037854302Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.038155371Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=302.309µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.039966096Z level=info msg="Executing migration" id="RBAC action name migrator"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.041439532Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.473316ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.043488145Z level=info msg="Executing migration" id="Add UID column to playlist"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.051340215Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=7.8461ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.05343947Z level=info msg="Executing migration" id="Update uid column values in playlist"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.053645457Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=207.157µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.055633587Z level=info msg="Executing migration" id="Add index for uid in playlist"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.056916696Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.282059ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.059616769Z level=info msg="Executing migration" id="update group index for alert rules"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.059997801Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=381.652µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.061845988Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.062119907Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=274.689µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.064347215Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.065267052Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=925.567µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.068247204Z level=info msg="Executing migration" id="add action column to seed_assignment"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.075223568Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.970184ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.077203909Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.084032248Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.806679ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.086073031Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.087425832Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.353071ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.089844597Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.164294369Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=74.441022ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.166551538Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.167757316Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.206398ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.169787768Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.170856941Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.066123ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.173390439Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.200861691Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=27.463612ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.205214754Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.212838578Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.626684ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.215124868Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.215653595Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=533.397µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.217560143Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.217820081Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=300.469µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.219858593Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.220095091Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=236.498µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.221959387Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.222294318Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=334.551µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.224922678Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.225161996Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=239.678µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.226949771Z level=info msg="Executing migration" id="create folder table"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.227935871Z level=info msg="Migration successfully executed" id="create folder table" duration=985.77µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.229797949Z level=info msg="Executing migration" id="Add index for parent_uid"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.23117571Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.377592ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.233724918Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.23474897Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.023152ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.236906366Z level=info msg="Executing migration" id="Update folder title length"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.236935937Z level=info msg="Migration successfully executed" id="Update folder title length" duration=30.831µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.23899582Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.240109885Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.113984ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.242179478Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.243171839Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=991.76µs
Jan 21 11:08:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 230 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.245448148Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.247407318Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.96113ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.250756471Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.251529055Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=777.264µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.253511355Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.253907627Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=397.362µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.257843529Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.259508629Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.671501ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.262427959Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.263553354Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.124755ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.271197068Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.272656702Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.466094ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.275018454Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.276534702Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.517357ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.278427369Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.279662387Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.233918ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.281548665Z level=info msg="Executing migration" id="create anon_device table"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.28236573Z level=info msg="Migration successfully executed" id="create anon_device table" duration=817.605µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.284107783Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.285311631Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.206828ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.287849879Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.288977843Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.130614ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.291665116Z level=info msg="Executing migration" id="create signing_key table"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.292684247Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.02212ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.295149202Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.296180434Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.031162ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.298816565Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.299896658Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.080243ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.302008813Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.302352523Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=344.45µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.304740677Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.312955188Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=8.209032ms
Jan 21 11:08:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:20.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.327005349Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.327957799Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=959.51µs
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.33092788Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.332059954Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.131154ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.334399506Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.335586762Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.187836ms
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.337260853Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.338328396Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.067353ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.341410851Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.342564266Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.153035ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.344740233Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.345784625Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.043952ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.348161578Z level=info msg="Executing migration" id="create sso_setting table"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.349184619Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.025882ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.351960194Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.353425329Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.473645ms
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.356091702Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.356447282Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=355.05µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.359352141Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.360554408Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=1.194097ms
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:08:20 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 111 pg[10.12( v 56'1081 (0'0,56'1081] local-lis/les=109/110 n=4 ec=58/49 lis/c=109/67 les/c/f=110/68/0 sis=111 pruub=14.981303215s) [2] async=[2] r=-1 lpr=111 pi=[67,111)/1 crt=56'1081 mlcod 56'1081 active pruub 233.676406860s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.36322075Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Jan 21 11:08:20 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 111 pg[10.12( v 56'1081 (0'0,56'1081] local-lis/les=109/110 n=4 ec=58/49 lis/c=109/67 les/c/f=110/68/0 sis=111 pruub=14.981197357s) [2] r=-1 lpr=111 pi=[67,111)/1 crt=56'1081 mlcod 0'0 unknown NOTIFY pruub 233.676406860s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 11:08:20 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 111 pg[10.13( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:08:20 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 111 pg[10.13( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=65/65 les/c/f=66/66/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:08:20 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 111 pg[10.14( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=72/72 les/c/f=73/73/0 sis=111) [0] r=0 lpr=111 pi=[72,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.371759672Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=8.533701ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.374293799Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.38150259Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.18753ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.383470191Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.383808951Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=338.27µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=migrator t=2026-01-21T16:08:20.385775851Z level=info msg="migrations completed" performed=547 skipped=0 duration=3.876865117s
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=sqlstore t=2026-01-21T16:08:20.387022719Z level=info msg="Created default organization"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=secrets t=2026-01-21T16:08:20.389071803Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=plugin.store t=2026-01-21T16:08:20.408449077Z level=info msg="Loading plugins..."
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:20 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:08:20 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:08:20 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:08:20 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:08:20 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.pkauht on compute-2
Jan 21 11:08:20 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.pkauht on compute-2
Jan 21 11:08:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:20.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=local.finder t=2026-01-21T16:08:20.49497234Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=plugin.store t=2026-01-21T16:08:20.495011171Z level=info msg="Plugins loaded" count=55 duration=86.562884ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=query_data t=2026-01-21T16:08:20.500146049Z level=info msg="Query Service initialization"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=live.push_http t=2026-01-21T16:08:20.506870645Z level=info msg="Live Push Gateway initialization"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=ngalert.migration t=2026-01-21T16:08:20.510090754Z level=info msg=Starting
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=ngalert.migration t=2026-01-21T16:08:20.510509356Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=ngalert.migration orgID=1 t=2026-01-21T16:08:20.510914989Z level=info msg="Migrating alerts for organisation"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=ngalert.migration orgID=1 t=2026-01-21T16:08:20.511645871Z level=info msg="Alerts found to migrate" alerts=0
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=ngalert.migration t=2026-01-21T16:08:20.513272921Z level=info msg="Completed alerting migration"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=ngalert.state.manager t=2026-01-21T16:08:20.53377758Z level=info msg="Running in alternative execution of Error/NoData mode"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=infra.usagestats.collector t=2026-01-21T16:08:20.535863484Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=provisioning.datasources t=2026-01-21T16:08:20.537061121Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=provisioning.alerting t=2026-01-21T16:08:20.547639435Z level=info msg="starting to provision alerting"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=provisioning.alerting t=2026-01-21T16:08:20.547670196Z level=info msg="finished to provision alerting"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=ngalert.state.manager t=2026-01-21T16:08:20.547939164Z level=info msg="Warming state cache for startup"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=ngalert.multiorg.alertmanager t=2026-01-21T16:08:20.548102439Z level=info msg="Starting MultiOrg Alertmanager"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=ngalert.state.manager t=2026-01-21T16:08:20.548355777Z level=info msg="State cache has been initialized" states=0 duration=416.323µs
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=ngalert.scheduler t=2026-01-21T16:08:20.548399739Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=ticker t=2026-01-21T16:08:20.548428179Z level=info msg=starting first_tick=2026-01-21T16:08:30Z
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=grafanaStorageLogger t=2026-01-21T16:08:20.548986917Z level=info msg="Storage starting"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=http.server t=2026-01-21T16:08:20.551497304Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=http.server t=2026-01-21T16:08:20.552098322Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=grafana.update.checker t=2026-01-21T16:08:20.623989597Z level=info msg="Update check succeeded" duration=75.846516ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=plugins.update.checker t=2026-01-21T16:08:20.625622006Z level=info msg="Update check succeeded" duration=77.149255ms
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=provisioning.dashboard t=2026-01-21T16:08:20.639335597Z level=info msg="starting to provision dashboards"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=grafana-apiserver t=2026-01-21T16:08:20.843497907Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Jan 21 11:08:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=grafana-apiserver t=2026-01-21T16:08:20.844081006Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Jan 21 11:08:20 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 27 completed events
Jan 21 11:08:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:08:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 21 11:08:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 189 B/s rd, 0 op/s; 20 B/s, 1 objects/s recovering
Jan 21 11:08:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:08:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:22.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:08:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:22.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: Deploying daemon keepalived.rgw.default.compute-2.pkauht on compute-2
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 21 11:08:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 112 pg[10.14( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=72/72 les/c/f=73/73/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[72,112)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:08:22 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 112 pg[10.14( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=72/72 les/c/f=73/73/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[72,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:08:22 np0005590810 systemd-coredump[98397]: Process 96611 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 45:#012#0  0x00007f922847032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Jan 21 11:08:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=provisioning.dashboard t=2026-01-21T16:08:22.615515239Z level=info msg="finished to provision dashboards"
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 21 11:08:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:22 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:08:22 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:08:22 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:08:22 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:08:22 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.eqdcyf on compute-0
Jan 21 11:08:22 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.eqdcyf on compute-0
Jan 21 11:08:22 np0005590810 systemd[1]: systemd-coredump@1-98396-0.service: Deactivated successfully.
Jan 21 11:08:22 np0005590810 systemd[1]: systemd-coredump@1-98396-0.service: Consumed 1.909s CPU time.
Jan 21 11:08:22 np0005590810 podman[98433]: 2026-01-21 16:08:22.772635053 +0000 UTC m=+0.028677232 container died 2c38abf31015215008bf4a63a17bab99d0f193ec9af435bb4ca0778f31a42759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:08:22 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6a613102054f9f68f6c22edefe80b177a73e5176f25e41b8d8aa05d2b4e5b86e-merged.mount: Deactivated successfully.
Jan 21 11:08:22 np0005590810 podman[98433]: 2026-01-21 16:08:22.809065245 +0000 UTC m=+0.065107414 container remove 2c38abf31015215008bf4a63a17bab99d0f193ec9af435bb4ca0778f31a42759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 11:08:22 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Main process exited, code=exited, status=139/n/a
Jan 21 11:08:22 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Failed with result 'exit-code'.
Jan 21 11:08:22 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 2.005s CPU time.
Jan 21 11:08:23 np0005590810 podman[98543]: 2026-01-21 16:08:23.175951567 +0000 UTC m=+0.045121223 container create ffde24807f175275a363f92743171b073fac665b74c8ce666f108c45d7b13f63 (image=quay.io/ceph/keepalived:2.2.4, name=peaceful_engelbart, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, architecture=x86_64, io.openshift.expose-services=, name=keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 21 11:08:23 np0005590810 systemd[1]: Started libpod-conmon-ffde24807f175275a363f92743171b073fac665b74c8ce666f108c45d7b13f63.scope.
Jan 21 11:08:23 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:23 np0005590810 podman[98543]: 2026-01-21 16:08:23.158851626 +0000 UTC m=+0.028021302 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 21 11:08:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:08:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 21 11:08:23 np0005590810 podman[98543]: 2026-01-21 16:08:23.327633022 +0000 UTC m=+0.196802698 container init ffde24807f175275a363f92743171b073fac665b74c8ce666f108c45d7b13f63 (image=quay.io/ceph/keepalived:2.2.4, name=peaceful_engelbart, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.openshift.expose-services=, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, build-date=2023-02-22T09:23:20, architecture=x86_64, version=2.2.4, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, distribution-scope=public, io.openshift.tags=Ceph keepalived)
Jan 21 11:08:23 np0005590810 podman[98543]: 2026-01-21 16:08:23.334829496 +0000 UTC m=+0.203999152 container start ffde24807f175275a363f92743171b073fac665b74c8ce666f108c45d7b13f63 (image=quay.io/ceph/keepalived:2.2.4, name=peaceful_engelbart, vcs-type=git, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 21 11:08:23 np0005590810 peaceful_engelbart[98559]: 0 0
Jan 21 11:08:23 np0005590810 systemd[1]: libpod-ffde24807f175275a363f92743171b073fac665b74c8ce666f108c45d7b13f63.scope: Deactivated successfully.
Jan 21 11:08:23 np0005590810 podman[98543]: 2026-01-21 16:08:23.425011978 +0000 UTC m=+0.294181624 container attach ffde24807f175275a363f92743171b073fac665b74c8ce666f108c45d7b13f63 (image=quay.io/ceph/keepalived:2.2.4, name=peaceful_engelbart, name=keepalived, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.openshift.expose-services=, release=1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4)
Jan 21 11:08:23 np0005590810 podman[98543]: 2026-01-21 16:08:23.426074651 +0000 UTC m=+0.295244307 container died ffde24807f175275a363f92743171b073fac665b74c8ce666f108c45d7b13f63 (image=quay.io/ceph/keepalived:2.2.4, name=peaceful_engelbart, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, vendor=Red Hat, Inc., description=keepalived for Ceph, vcs-type=git, io.openshift.expose-services=, name=keepalived, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, version=2.2.4, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 21 11:08:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 21 11:08:23 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 21 11:08:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 113 pg[10.13( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=5 ec=58/49 lis/c=111/65 les/c/f=112/66/0 sis=113) [0] r=0 lpr=113 pi=[65,113)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:08:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 113 pg[10.13( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=5 ec=58/49 lis/c=111/65 les/c/f=112/66/0 sis=113) [0] r=0 lpr=113 pi=[65,113)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:08:24 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:24 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:24 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:24 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:24 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 11:08:24 np0005590810 ceph-mon[74380]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 11:08:24 np0005590810 ceph-mon[74380]: Deploying daemon keepalived.rgw.default.compute-0.eqdcyf on compute-0
Jan 21 11:08:24 np0005590810 systemd[1]: var-lib-containers-storage-overlay-cc2fccb2a2a2e42fdd56805e8323c867dc46216ee8cd22728bb45163aa923127-merged.mount: Deactivated successfully.
Jan 21 11:08:24 np0005590810 podman[98543]: 2026-01-21 16:08:24.165501212 +0000 UTC m=+1.034670868 container remove ffde24807f175275a363f92743171b073fac665b74c8ce666f108c45d7b13f63 (image=quay.io/ceph/keepalived:2.2.4, name=peaceful_engelbart, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vcs-type=git, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4)
Jan 21 11:08:24 np0005590810 systemd[1]: Reloading.
Jan 21 11:08:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 1 unknown, 1 remapped+peering, 1 peering, 350 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 21 11:08:24 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:08:24 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:08:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:24.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 21 11:08:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:24.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 21 11:08:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 114 pg[10.14( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=5 ec=58/49 lis/c=112/72 les/c/f=113/73/0 sis=114) [0] r=0 lpr=114 pi=[72,114)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:08:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 114 pg[10.14( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=5 ec=58/49 lis/c=112/72 les/c/f=113/73/0 sis=114) [0] r=0 lpr=114 pi=[72,114)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:08:24 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 21 11:08:24 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 114 pg[10.13( v 56'1081 (0'0,56'1081] local-lis/les=113/114 n=5 ec=58/49 lis/c=111/65 les/c/f=112/66/0 sis=113) [0] r=0 lpr=113 pi=[65,113)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:08:24 np0005590810 systemd[1]: libpod-conmon-ffde24807f175275a363f92743171b073fac665b74c8ce666f108c45d7b13f63.scope: Deactivated successfully.
Jan 21 11:08:24 np0005590810 systemd[1]: Reloading.
Jan 21 11:08:24 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:08:24 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:08:24 np0005590810 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.eqdcyf for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:08:24 np0005590810 podman[98709]: 2026-01-21 16:08:24.987890211 +0000 UTC m=+0.041427578 container create 3a5f84d2587cb468e702a7731586641fea6ed06755fece847a301a31b118aca1 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived)
Jan 21 11:08:25 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04dfe16580a0c0c16bc034f3ab0e4a8ccbf5da2ee275af8e935496ba8b96e3c/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:25 np0005590810 podman[98709]: 2026-01-21 16:08:25.046711159 +0000 UTC m=+0.100248536 container init 3a5f84d2587cb468e702a7731586641fea6ed06755fece847a301a31b118aca1 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.expose-services=, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, distribution-scope=public, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 21 11:08:25 np0005590810 podman[98709]: 2026-01-21 16:08:25.054113599 +0000 UTC m=+0.107650976 container start 3a5f84d2587cb468e702a7731586641fea6ed06755fece847a301a31b118aca1 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1793, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.buildah.version=1.28.2, description=keepalived for Ceph, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 21 11:08:25 np0005590810 bash[98709]: 3a5f84d2587cb468e702a7731586641fea6ed06755fece847a301a31b118aca1
Jan 21 11:08:25 np0005590810 podman[98709]: 2026-01-21 16:08:24.968987454 +0000 UTC m=+0.022524841 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 21 11:08:25 np0005590810 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.eqdcyf for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:25 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:25 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:25 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:25 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:25 2026: Failed to bind to process monitoring socket - errno 98 - Address already in use
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:25 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:25 2026: Starting VRRP child process, pid=4
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:25 2026: Startup complete
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:25 2026: (VI_0) Entering BACKUP STATE
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:25 2026: (VI_0) Entering BACKUP STATE (init)
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:25 2026: VRRP_Script(check_backend) succeeded
Jan 21 11:08:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:25 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:25 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 21 11:08:25 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:25 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:25 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 208433a7-6027-4d80-8f80-b24caa66bb33 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 21 11:08:25 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 208433a7-6027-4d80-8f80-b24caa66bb33 (Updating ingress.rgw.default deployment (+4 -> 4)) in 9 seconds
Jan 21 11:08:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 21 11:08:25 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:25 np0005590810 ceph-mgr[74671]: [progress INFO root] update: starting ev 77cd8b1d-ee62-447d-a46d-749fea7683e4 (Updating prometheus deployment (+1 -> 1))
Jan 21 11:08:25 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Jan 21 11:08:25 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Jan 21 11:08:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 21 11:08:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 21 11:08:25 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 21 11:08:25 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 115 pg[10.14( v 56'1081 (0'0,56'1081] local-lis/les=114/115 n=5 ec=58/49 lis/c=112/72 les/c/f=113/73/0 sis=114) [0] r=0 lpr=114 pi=[72,114)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:25 2026: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:25 2026: (VI_0) received an invalid passwd!
Jan 21 11:08:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:25 2026: (VI_0) Entering MASTER STATE
Jan 21 11:08:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 1 unknown, 1 remapped+peering, 1 peering, 350 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:08:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:08:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:08:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:08:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:26.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:08:26 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:08:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:26.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:26 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:26 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:26 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:26 np0005590810 ceph-mon[74380]: Deploying daemon prometheus.compute-0 on compute-0
Jan 21 11:08:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:26 2026: (VI_0) received an invalid passwd!
Jan 21 11:08:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:26 2026: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Jan 21 11:08:26 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 28 completed events
Jan 21 11:08:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:08:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:26 np0005590810 ceph-mgr[74671]: [progress WARNING root] Starting Global Recovery Event,3 pgs not in active + clean state
Jan 21 11:08:26 np0005590810 systemd-logind[795]: New session 38 of user zuul.
Jan 21 11:08:26 np0005590810 systemd[1]: Started Session 38 of User zuul.
Jan 21 11:08:27 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc[97077]: Wed Jan 21 16:08:27 2026: (VI_0) received an invalid passwd!
Jan 21 11:08:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:27 2026: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Jan 21 11:08:27 np0005590810 python3.9[99092]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 21 11:08:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/160827 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:08:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 904 B/s wr, 2 op/s; 19 B/s, 2 objects/s recovering
Jan 21 11:08:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 21 11:08:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 21 11:08:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:28.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:08:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:28.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 21 11:08:28 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 21 11:08:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 21 11:08:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 21 11:08:28 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 21 11:08:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-rgw-default-compute-0-eqdcyf[98724]: Wed Jan 21 16:08:28 2026: (VI_0) Entering MASTER STATE
Jan 21 11:08:29 np0005590810 python3.9[99282]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:08:29 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 21 11:08:29 np0005590810 podman[98825]: 2026-01-21 16:08:29.906678412 +0000 UTC m=+4.011480714 volume create bfb89eb662f5e2dc78b40d432c9c69c4ead4b87d5aa3bc8f55cdfd6f2a34d94b
Jan 21 11:08:29 np0005590810 podman[98825]: 2026-01-21 16:08:29.918960184 +0000 UTC m=+4.023762486 container create 2124257b75b3273576b9e6a6e020762983dcaae424bd394fa28e5897dd3f4f13 (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_hoover, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:29 np0005590810 podman[98825]: 2026-01-21 16:08:29.889454906 +0000 UTC m=+3.994257238 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 21 11:08:29 np0005590810 systemd[1]: Started libpod-conmon-2124257b75b3273576b9e6a6e020762983dcaae424bd394fa28e5897dd3f4f13.scope.
Jan 21 11:08:29 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7ab24cb6d1f695e84d2cb92e08574289cc537ec51f243875dc833f51d1a335/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:30 np0005590810 podman[98825]: 2026-01-21 16:08:30.002238142 +0000 UTC m=+4.107040454 container init 2124257b75b3273576b9e6a6e020762983dcaae424bd394fa28e5897dd3f4f13 (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_hoover, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:30 np0005590810 podman[98825]: 2026-01-21 16:08:30.010934482 +0000 UTC m=+4.115736784 container start 2124257b75b3273576b9e6a6e020762983dcaae424bd394fa28e5897dd3f4f13 (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_hoover, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:30 np0005590810 podman[98825]: 2026-01-21 16:08:30.014116121 +0000 UTC m=+4.118918423 container attach 2124257b75b3273576b9e6a6e020762983dcaae424bd394fa28e5897dd3f4f13 (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_hoover, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:30 np0005590810 hopeful_hoover[99562]: 65534 65534
Jan 21 11:08:30 np0005590810 systemd[1]: libpod-2124257b75b3273576b9e6a6e020762983dcaae424bd394fa28e5897dd3f4f13.scope: Deactivated successfully.
Jan 21 11:08:30 np0005590810 podman[98825]: 2026-01-21 16:08:30.016230137 +0000 UTC m=+4.121032439 container died 2124257b75b3273576b9e6a6e020762983dcaae424bd394fa28e5897dd3f4f13 (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_hoover, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:30 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8a7ab24cb6d1f695e84d2cb92e08574289cc537ec51f243875dc833f51d1a335-merged.mount: Deactivated successfully.
Jan 21 11:08:30 np0005590810 podman[98825]: 2026-01-21 16:08:30.061479663 +0000 UTC m=+4.166281955 container remove 2124257b75b3273576b9e6a6e020762983dcaae424bd394fa28e5897dd3f4f13 (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_hoover, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:30 np0005590810 podman[98825]: 2026-01-21 16:08:30.065381574 +0000 UTC m=+4.170183896 volume remove bfb89eb662f5e2dc78b40d432c9c69c4ead4b87d5aa3bc8f55cdfd6f2a34d94b
Jan 21 11:08:30 np0005590810 systemd[1]: libpod-conmon-2124257b75b3273576b9e6a6e020762983dcaae424bd394fa28e5897dd3f4f13.scope: Deactivated successfully.
Jan 21 11:08:30 np0005590810 python3.9[99560]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:08:30 np0005590810 podman[99578]: 2026-01-21 16:08:30.133307365 +0000 UTC m=+0.040214911 volume create 275b67055c7a98cce6ef2c287f39bee2b401159224d713fbdea70bd0ef4928a0
Jan 21 11:08:30 np0005590810 podman[99578]: 2026-01-21 16:08:30.141808239 +0000 UTC m=+0.048715785 container create ff94c6c1481d07368f14ab0c360d357e260bb91fd30153be884e0dc35da87bac (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_napier, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:30 np0005590810 systemd[1]: Started libpod-conmon-ff94c6c1481d07368f14ab0c360d357e260bb91fd30153be884e0dc35da87bac.scope.
Jan 21 11:08:30 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:30 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f2f0d42b307ce3bd0e64784ea4fc243c5d7d1d152be3bf4c1dac86688e2a5b1/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:30 np0005590810 podman[99578]: 2026-01-21 16:08:30.117937278 +0000 UTC m=+0.024844844 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 21 11:08:30 np0005590810 podman[99578]: 2026-01-21 16:08:30.217913165 +0000 UTC m=+0.124820741 container init ff94c6c1481d07368f14ab0c360d357e260bb91fd30153be884e0dc35da87bac (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_napier, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:30 np0005590810 podman[99578]: 2026-01-21 16:08:30.22387091 +0000 UTC m=+0.130778456 container start ff94c6c1481d07368f14ab0c360d357e260bb91fd30153be884e0dc35da87bac (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_napier, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:30 np0005590810 hopeful_napier[99596]: 65534 65534
Jan 21 11:08:30 np0005590810 systemd[1]: libpod-ff94c6c1481d07368f14ab0c360d357e260bb91fd30153be884e0dc35da87bac.scope: Deactivated successfully.
Jan 21 11:08:30 np0005590810 podman[99578]: 2026-01-21 16:08:30.227668617 +0000 UTC m=+0.134576193 container attach ff94c6c1481d07368f14ab0c360d357e260bb91fd30153be884e0dc35da87bac (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_napier, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:30 np0005590810 podman[99578]: 2026-01-21 16:08:30.228095001 +0000 UTC m=+0.135002547 container died ff94c6c1481d07368f14ab0c360d357e260bb91fd30153be884e0dc35da87bac (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_napier, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 853 B/s wr, 2 op/s; 18 B/s, 2 objects/s recovering
Jan 21 11:08:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 21 11:08:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 21 11:08:30 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6f2f0d42b307ce3bd0e64784ea4fc243c5d7d1d152be3bf4c1dac86688e2a5b1-merged.mount: Deactivated successfully.
Jan 21 11:08:30 np0005590810 podman[99578]: 2026-01-21 16:08:30.266778783 +0000 UTC m=+0.173686339 container remove ff94c6c1481d07368f14ab0c360d357e260bb91fd30153be884e0dc35da87bac (image=quay.io/prometheus/prometheus:v2.51.0, name=hopeful_napier, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:30 np0005590810 podman[99578]: 2026-01-21 16:08:30.271880902 +0000 UTC m=+0.178788468 volume remove 275b67055c7a98cce6ef2c287f39bee2b401159224d713fbdea70bd0ef4928a0
Jan 21 11:08:30 np0005590810 systemd[1]: libpod-conmon-ff94c6c1481d07368f14ab0c360d357e260bb91fd30153be884e0dc35da87bac.scope: Deactivated successfully.
Jan 21 11:08:30 np0005590810 systemd[1]: Reloading.
Jan 21 11:08:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:30.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:30 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:08:30 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:08:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:08:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:30.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:08:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 21 11:08:30 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 21 11:08:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 21 11:08:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 21 11:08:30 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 21 11:08:30 np0005590810 systemd[1]: Reloading.
Jan 21 11:08:30 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:08:30 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:08:30 np0005590810 systemd[1]: Starting Ceph prometheus.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:08:31 np0005590810 podman[99889]: 2026-01-21 16:08:31.236334806 +0000 UTC m=+0.058318353 container create 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:31 np0005590810 python3.9[99854]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:08:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bfcf46872b7b670dc05f9f24159b15f72e66e542b4402e55ea41128afcf75c/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bfcf46872b7b670dc05f9f24159b15f72e66e542b4402e55ea41128afcf75c/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:31 np0005590810 podman[99889]: 2026-01-21 16:08:31.287429194 +0000 UTC m=+0.109412761 container init 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:31 np0005590810 podman[99889]: 2026-01-21 16:08:31.292593395 +0000 UTC m=+0.114576942 container start 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:31 np0005590810 podman[99889]: 2026-01-21 16:08:31.20940585 +0000 UTC m=+0.031389427 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 21 11:08:31 np0005590810 bash[99889]: 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4
Jan 21 11:08:31 np0005590810 systemd[1]: Started Ceph prometheus.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.331Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.331Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.331Z caller=main.go:623 level=info host_details="(Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 x86_64 compute-0 (none))"
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.331Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.331Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.335Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.336Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.337Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.337Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.343Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.343Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=4.62µs
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.343Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.343Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.343Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=43.152µs wal_replay_duration=293.229µs wbl_replay_duration=290ns total_replay_duration=369.702µs
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.345Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.345Z caller=main.go:1153 level=info msg="TSDB started"
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.345Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.375Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=29.826567ms db_storage=2.1µs remote_storage=2.66µs web_handler=1.03µs query_engine=1.19µs scrape=4.124428ms scrape_sd=206.306µs notify=17.241µs notify_sd=13.78µs rules=24.876613ms tracing=15.781µs
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.375Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0[99906]: ts=2026-01-21T16:08:31.375Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:31 np0005590810 ceph-mgr[74671]: [progress INFO root] complete: finished ev 77cd8b1d-ee62-447d-a46d-749fea7683e4 (Updating prometheus deployment (+1 -> 1))
Jan 21 11:08:31 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 77cd8b1d-ee62-447d-a46d-749fea7683e4 (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Jan 21 11:08:31 np0005590810 ceph-mgr[74671]: [progress INFO root] Writing back 29 completed events
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 11:08:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:31 np0005590810 ceph-mgr[74671]: [progress INFO root] Completed event 34d79bba-9087-4d38-abf6-32c466cd9561 (Global Recovery Event) in 5 seconds
Jan 21 11:08:32 np0005590810 python3.9[100074]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:08:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 753 B/s wr, 2 op/s; 16 B/s, 2 objects/s recovering
Jan 21 11:08:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 21 11:08:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 21 11:08:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:32.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Jan 21 11:08:32 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.ygffhs(active, since 2m), standbys: compute-1.oewgcf, compute-2.kdxyxe
Jan 21 11:08:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:32.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:32 np0005590810 systemd-logind[795]: Session 35 logged out. Waiting for processes to exit.
Jan 21 11:08:32 np0005590810 systemd[1]: session-35.scope: Deactivated successfully.
Jan 21 11:08:32 np0005590810 systemd[1]: session-35.scope: Consumed 50.544s CPU time.
Jan 21 11:08:32 np0005590810 systemd-logind[795]: Removed session 35.
Jan 21 11:08:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ignoring --setuser ceph since I am not root
Jan 21 11:08:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ignoring --setgroup ceph since I am not root
Jan 21 11:08:32 np0005590810 ceph-mgr[74671]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 21 11:08:32 np0005590810 ceph-mgr[74671]: pidfile_write: ignore empty --pid-file
Jan 21 11:08:32 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'alerts'
Jan 21 11:08:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:32.680+0000 7f89ddd52140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 11:08:32 np0005590810 ceph-mgr[74671]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 11:08:32 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'balancer'
Jan 21 11:08:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:32.765+0000 7f89ddd52140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 11:08:32 np0005590810 ceph-mgr[74671]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 11:08:32 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'cephadm'
Jan 21 11:08:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 21 11:08:32 np0005590810 python3.9[100247]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:08:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 21 11:08:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 21 11:08:32 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 21 11:08:32 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:32 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 21 11:08:32 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Jan 21 11:08:33 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Scheduled restart job, restart counter is at 2.
Jan 21 11:08:33 np0005590810 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:08:33 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 2.005s CPU time.
Jan 21 11:08:33 np0005590810 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:08:33 np0005590810 podman[100383]: 2026-01-21 16:08:33.301496469 +0000 UTC m=+0.043613206 container create 1851d1962129885d967a85b3c141d64f2256d7ce1d09e8b7f2c8a12b067da1c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 11:08:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc0fffd69588edb52b21ab00fd2434294dd3ff0b497f772bb7dbfb44bf33e37/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc0fffd69588edb52b21ab00fd2434294dd3ff0b497f772bb7dbfb44bf33e37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc0fffd69588edb52b21ab00fd2434294dd3ff0b497f772bb7dbfb44bf33e37/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc0fffd69588edb52b21ab00fd2434294dd3ff0b497f772bb7dbfb44bf33e37/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbatwb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:33 np0005590810 podman[100383]: 2026-01-21 16:08:33.351271097 +0000 UTC m=+0.093387824 container init 1851d1962129885d967a85b3c141d64f2256d7ce1d09e8b7f2c8a12b067da1c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 11:08:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:08:33 np0005590810 podman[100383]: 2026-01-21 16:08:33.359047768 +0000 UTC m=+0.101164505 container start 1851d1962129885d967a85b3c141d64f2256d7ce1d09e8b7f2c8a12b067da1c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:08:33 np0005590810 bash[100383]: 1851d1962129885d967a85b3c141d64f2256d7ce1d09e8b7f2c8a12b067da1c1
Jan 21 11:08:33 np0005590810 podman[100383]: 2026-01-21 16:08:33.280717544 +0000 UTC m=+0.022834311 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:08:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:33 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 21 11:08:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:33 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 21 11:08:33 np0005590810 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:08:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:33 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 21 11:08:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:33 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 21 11:08:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:33 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 21 11:08:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:33 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 21 11:08:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:33 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 21 11:08:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:33 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:08:33 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'crash'
Jan 21 11:08:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:33.582+0000 7f89ddd52140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 11:08:33 np0005590810 ceph-mgr[74671]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 11:08:33 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'dashboard'
Jan 21 11:08:33 np0005590810 python3.9[100513]: ansible-ansible.builtin.service_facts Invoked
Jan 21 11:08:33 np0005590810 network[100530]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 11:08:33 np0005590810 network[100531]: 'network-scripts' will be removed from distribution in near future.
Jan 21 11:08:33 np0005590810 network[100532]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 11:08:34 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'devicehealth'
Jan 21 11:08:34 np0005590810 ceph-mon[74380]: from='mgr.14296 192.168.122.100:0/2522074142' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 21 11:08:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:34.260+0000 7f89ddd52140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 11:08:34 np0005590810 ceph-mgr[74671]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 11:08:34 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 11:08:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:08:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:34.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:08:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 11:08:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 11:08:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]:  from numpy import show_config as show_numpy_config
Jan 21 11:08:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:34.430+0000 7f89ddd52140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 11:08:34 np0005590810 ceph-mgr[74671]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 11:08:34 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'influx'
Jan 21 11:08:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:34.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:34.503+0000 7f89ddd52140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 11:08:34 np0005590810 ceph-mgr[74671]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 11:08:34 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'insights'
Jan 21 11:08:34 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'iostat'
Jan 21 11:08:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:34.641+0000 7f89ddd52140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 11:08:34 np0005590810 ceph-mgr[74671]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 11:08:34 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'k8sevents'
Jan 21 11:08:35 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'localpool'
Jan 21 11:08:35 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 11:08:35 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'mirroring'
Jan 21 11:08:35 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'nfs'
Jan 21 11:08:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:35.694+0000 7f89ddd52140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 11:08:35 np0005590810 ceph-mgr[74671]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 11:08:35 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'orchestrator'
Jan 21 11:08:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:35.948+0000 7f89ddd52140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 11:08:35 np0005590810 ceph-mgr[74671]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 11:08:35 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 11:08:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:36.038+0000 7f89ddd52140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 11:08:36 np0005590810 ceph-mgr[74671]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 11:08:36 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'osd_support'
Jan 21 11:08:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:36.134+0000 7f89ddd52140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 11:08:36 np0005590810 ceph-mgr[74671]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 11:08:36 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 11:08:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:36.227+0000 7f89ddd52140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 11:08:36 np0005590810 ceph-mgr[74671]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 11:08:36 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'progress'
Jan 21 11:08:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:36.303+0000 7f89ddd52140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 11:08:36 np0005590810 ceph-mgr[74671]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 11:08:36 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'prometheus'
Jan 21 11:08:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:08:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:36.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:08:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:36.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:36.705+0000 7f89ddd52140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 11:08:36 np0005590810 ceph-mgr[74671]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 11:08:36 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rbd_support'
Jan 21 11:08:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:36.814+0000 7f89ddd52140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 11:08:36 np0005590810 ceph-mgr[74671]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 11:08:36 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'restful'
Jan 21 11:08:37 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rgw'
Jan 21 11:08:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:37.289+0000 7f89ddd52140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 11:08:37 np0005590810 ceph-mgr[74671]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 11:08:37 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'rook'
Jan 21 11:08:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:37.865+0000 7f89ddd52140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 11:08:37 np0005590810 ceph-mgr[74671]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 11:08:37 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'selftest'
Jan 21 11:08:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:37.941+0000 7f89ddd52140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 11:08:37 np0005590810 ceph-mgr[74671]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 11:08:37 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'snap_schedule'
Jan 21 11:08:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:38.025+0000 7f89ddd52140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'stats'
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'status'
Jan 21 11:08:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:38.177+0000 7f89ddd52140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'telegraf'
Jan 21 11:08:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:38.251+0000 7f89ddd52140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'telemetry'
Jan 21 11:08:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:38.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:08:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:38.424+0000 7f89ddd52140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 11:08:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:38.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:38 np0005590810 python3.9[100796]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:08:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:38.689+0000 7f89ddd52140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'volumes'
Jan 21 11:08:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:38.993+0000 7f89ddd52140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 11:08:38 np0005590810 ceph-mgr[74671]: mgr[py] Loading python module 'zabbix'
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:39.076+0000 7f89ddd52140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ygffhs restarted
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ygffhs
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: ms_deliver_dispatch: unhandled message 0x564912869860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map Activating!
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr handle_mgr_map I am now activating
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.ygffhs(active, starting, since 0.0389063s), standbys: compute-1.oewgcf, compute-2.kdxyxe
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.hjphzb"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.hjphzb"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 all = 0
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 all = 0
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.akvqho"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.akvqho"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 all = 0
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ygffhs", "id": "compute-0.ygffhs"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ygffhs", "id": "compute-0.ygffhs"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.oewgcf", "id": "compute-1.oewgcf"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr metadata", "who": "compute-1.oewgcf", "id": "compute-1.oewgcf"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.kdxyxe", "id": "compute-2.kdxyxe"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr metadata", "who": "compute-2.kdxyxe", "id": "compute-2.kdxyxe"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 all = 1
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: balancer
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Starting
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Manager daemon compute-0.ygffhs is now available
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:08:39
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: Active manager daemon compute-0.ygffhs restarted
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: Activating manager daemon compute-0.ygffhs
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: cephadm
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: crash
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: dashboard
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: devicehealth
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO sso] Loading SSO DB version=1
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: iostat
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] Starting
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: nfs
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: orchestrator
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: pg_autoscaler
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: progress
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [progress INFO root] Loading...
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f896a974a90>, <progress.module.GhostEvent object at 0x7f89620fe940>, <progress.module.GhostEvent object at 0x7f895c0ca370>, <progress.module.GhostEvent object at 0x7f895c0ca280>, <progress.module.GhostEvent object at 0x7f895c0ca2b0>, <progress.module.GhostEvent object at 0x7f895c0ca2e0>, <progress.module.GhostEvent object at 0x7f895c0ca1f0>, <progress.module.GhostEvent object at 0x7f895c0ca220>, <progress.module.GhostEvent object at 0x7f895c0ca250>, <progress.module.GhostEvent object at 0x7f895c0ca160>, <progress.module.GhostEvent object at 0x7f895c0ca190>, <progress.module.GhostEvent object at 0x7f895c0ca1c0>, <progress.module.GhostEvent object at 0x7f895c0ca0d0>, <progress.module.GhostEvent object at 0x7f895c0ca100>, <progress.module.GhostEvent object at 0x7f895c0ca130>, <progress.module.GhostEvent object at 0x7f895c0ca040>, <progress.module.GhostEvent object at 0x7f895c0ca070>, <progress.module.GhostEvent object at 0x7f895c0ca0a0>, <progress.module.GhostEvent object at 0x7f895c08f6a0>, <progress.module.GhostEvent object at 0x7f895c08ff70>, <progress.module.GhostEvent object at 0x7f895c08ffa0>, <progress.module.GhostEvent object at 0x7f895c08ffd0>, <progress.module.GhostEvent object at 0x7f895c08fee0>, <progress.module.GhostEvent object at 0x7f895c08ff10>, <progress.module.GhostEvent object at 0x7f895c08ff40>, <progress.module.GhostEvent object at 0x7f895c08fe50>, <progress.module.GhostEvent object at 0x7f895c08fe80>, <progress.module.GhostEvent object at 0x7f895c08feb0>, <progress.module.GhostEvent object at 0x7f895c08fd90>] historic events
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [progress INFO root] Loaded OSDMap, ready.
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: prometheus
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [prometheus INFO root] server_addr: :: server_port: 9283
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [prometheus INFO root] Cache enabled
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [prometheus INFO root] starting metric collection thread
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: [21/Jan/2026:16:08:39] ENGINE Bus STARTING
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [prometheus INFO root] Starting engine...
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.error] [21/Jan/2026:16:08:39] ENGINE Bus STARTING
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: CherryPy Checker:
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: The Application mounted at '' has an empty config.
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] recovery thread starting
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] starting setup
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: rbd_support
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: restful
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: status
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: telemetry
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [restful INFO root] server_addr: :: server_port: 8003
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [restful WARNING root] server not running: no certificate configured
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] PerfHandler: starting
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: mgr load Constructed class from module: volumes
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:39.291+0000 7f8946ca0640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: client.0 error registering admin socket command: (17) File exists
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:39.303+0000 7f894bcaa640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: client.0 error registering admin socket command: (17) File exists
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:39.303+0000 7f894bcaa640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: client.0 error registering admin socket command: (17) File exists
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:39.303+0000 7f894bcaa640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: client.0 error registering admin socket command: (17) File exists
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:39.303+0000 7f894bcaa640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: client.0 error registering admin socket command: (17) File exists
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:08:39.303+0000 7f894bcaa640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: client.0 error registering admin socket command: (17) File exists
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TaskHandler: starting
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"} v 0)
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"}]: dispatch
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] setup complete
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: [21/Jan/2026:16:08:39] ENGINE Serving on http://:::9283
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.error] [21/Jan/2026:16:08:39] ENGINE Serving on http://:::9283
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.error] [21/Jan/2026:16:08:39] ENGINE Bus STARTED
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [prometheus INFO root] Engine started.
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: [21/Jan/2026:16:08:39] ENGINE Bus STARTED
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 21 11:08:39 np0005590810 python3.9[100994]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:08:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:08:39 np0005590810 systemd-logind[795]: New session 39 of user ceph-admin.
Jan 21 11:08:39 np0005590810 systemd[1]: Started Session 39 of User ceph-admin.
Jan 21 11:08:39 np0005590810 ceph-mgr[74671]: [dashboard INFO dashboard.module] Engine started.
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdxyxe restarted
Jan 21 11:08:39 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdxyxe started
Jan 21 11:08:40 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.oewgcf restarted
Jan 21 11:08:40 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.oewgcf started
Jan 21 11:08:40 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.ygffhs(active, since 1.07701s), standbys: compute-2.kdxyxe, compute-1.oewgcf
Jan 21 11:08:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:40 np0005590810 ceph-mon[74380]: Manager daemon compute-0.ygffhs is now available
Jan 21 11:08:40 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:40 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/mirror_snapshot_schedule"}]: dispatch
Jan 21 11:08:40 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ygffhs/trash_purge_schedule"}]: dispatch
Jan 21 11:08:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:40.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:40 np0005590810 podman[101308]: 2026-01-21 16:08:40.384591975 +0000 UTC m=+0.067204860 container exec 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:08:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:40.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:40 np0005590810 podman[101308]: 2026-01-21 16:08:40.509794916 +0000 UTC m=+0.192407831 container exec_died 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:08:40 np0005590810 python3.9[101431]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:08:40 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:08:40] ENGINE Bus STARTING
Jan 21 11:08:40 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:08:40] ENGINE Bus STARTING
Jan 21 11:08:40 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:08:40] ENGINE Serving on http://192.168.122.100:8765
Jan 21 11:08:40 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:08:40] ENGINE Serving on http://192.168.122.100:8765
Jan 21 11:08:41 np0005590810 podman[101531]: 2026-01-21 16:08:41.010165547 +0000 UTC m=+0.106603384 container exec 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:41 np0005590810 podman[101531]: 2026-01-21 16:08:41.046674582 +0000 UTC m=+0.143112389 container exec_died 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:41 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:08:41] ENGINE Serving on https://192.168.122.100:7150
Jan 21 11:08:41 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:08:41] ENGINE Serving on https://192.168.122.100:7150
Jan 21 11:08:41 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:08:41] ENGINE Bus STARTED
Jan 21 11:08:41 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:08:41] ENGINE Bus STARTED
Jan 21 11:08:41 np0005590810 ceph-mgr[74671]: [cephadm INFO cherrypy.error] [21/Jan/2026:16:08:41] ENGINE Client ('192.168.122.100', 44488) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 11:08:41 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : [21/Jan/2026:16:08:41] ENGINE Client ('192.168.122.100', 44488) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 11:08:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 21 11:08:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 21 11:08:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 21 11:08:41 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 21 11:08:41 np0005590810 podman[101661]: 2026-01-21 16:08:41.455665193 +0000 UTC m=+0.155022680 container exec 1851d1962129885d967a85b3c141d64f2256d7ce1d09e8b7f2c8a12b067da1c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:08:41 np0005590810 podman[101733]: 2026-01-21 16:08:41.548612672 +0000 UTC m=+0.072274288 container exec_died 1851d1962129885d967a85b3c141d64f2256d7ce1d09e8b7f2c8a12b067da1c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:08:41 np0005590810 podman[101661]: 2026-01-21 16:08:41.557545409 +0000 UTC m=+0.256902906 container exec_died 1851d1962129885d967a85b3c141d64f2256d7ce1d09e8b7f2c8a12b067da1c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 21 11:08:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 21 11:08:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 21 11:08:41 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.ygffhs(active, since 2s), standbys: compute-2.kdxyxe, compute-1.oewgcf
Jan 21 11:08:41 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 21 11:08:41 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] Check health
Jan 21 11:08:41 np0005590810 podman[101866]: 2026-01-21 16:08:41.769903939 +0000 UTC m=+0.052203674 container exec 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:08:41 np0005590810 podman[101866]: 2026-01-21 16:08:41.781659634 +0000 UTC m=+0.063959359 container exec_died 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:08:41 np0005590810 podman[101932]: 2026-01-21 16:08:41.972089162 +0000 UTC m=+0.048592510 container exec e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vcs-type=git, name=keepalived, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, release=1793, description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 21 11:08:41 np0005590810 podman[101932]: 2026-01-21 16:08:41.985641733 +0000 UTC m=+0.062145061 container exec_died e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, distribution-scope=public, io.openshift.expose-services=, release=1793, description=keepalived for Ceph, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, com.redhat.component=keepalived-container, name=keepalived, vendor=Red Hat, Inc.)
Jan 21 11:08:42 np0005590810 python3.9[101865]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:42 np0005590810 podman[102003]: 2026-01-21 16:08:42.189621414 +0000 UTC m=+0.052625808 container exec 8b88c706f1c281ed839a461eb527042d837bac9b6eb951b300d6634e57c39e36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:42 np0005590810 podman[102003]: 2026-01-21 16:08:42.218592383 +0000 UTC m=+0.081596747 container exec_died 8b88c706f1c281ed839a461eb527042d837bac9b6eb951b300d6634e57c39e36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:08:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:42.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:08:40] ENGINE Bus STARTING
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:08:40] ENGINE Serving on http://192.168.122.100:8765
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:08:41] ENGINE Serving on https://192.168.122.100:7150
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:08:41] ENGINE Bus STARTED
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: [21/Jan/2026:16:08:41] ENGINE Client ('192.168.122.100', 44488) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:42.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:42 np0005590810 python3.9[102171]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:08:42 np0005590810 podman[102130]: 2026-01-21 16:08:42.943890475 +0000 UTC m=+0.340670388 container exec c7b256022c9d0ef0c6be3f0e958a6963d34737af722d182f28ce54bc60120280 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:43 np0005590810 podman[102130]: 2026-01-21 16:08:43.124800918 +0000 UTC m=+0.521580831 container exec_died c7b256022c9d0ef0c6be3f0e958a6963d34737af722d182f28ce54bc60120280 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:08:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v6: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 21 11:08:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 21 11:08:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:08:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/160843 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:08:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:08:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:08:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:08:44 np0005590810 podman[102280]: 2026-01-21 16:08:44.142707202 +0000 UTC m=+0.521873919 container exec 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:44 np0005590810 podman[102280]: 2026-01-21 16:08:44.178201746 +0000 UTC m=+0.557368463 container exec_died 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.ygffhs(active, since 5s), standbys: compute-2.kdxyxe, compute-1.oewgcf
Jan 21 11:08:44 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 121 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=84/84 les/c/f=85/85/0 sis=121) [0] r=0 lpr=121 pi=[84,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:08:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:44.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:08:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:44.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v8: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000006:nfs.cephfs.2: -2
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:08:45] "GET /metrics HTTP/1.1" 200 46582 "" "Prometheus/2.51.0"
Jan 21 11:08:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:08:45] "GET /metrics HTTP/1.1" 200 46582 "" "Prometheus/2.51.0"
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 21 11:08:45 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 122 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=84/84 les/c/f=85/85/0 sis=122) [0]/[1] r=-1 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:08:45 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 122 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=84/84 les/c/f=85/85/0 sis=122) [0]/[1] r=-1 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:45 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e400016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 21 11:08:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 11:08:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:46.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:46 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:46.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 21 11:08:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:46 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:46 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 11:08:46 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 21 11:08:46 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:46 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 21 11:08:46 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:46 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:46 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 11:08:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 21 11:08:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:47 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 21 11:08:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 21 11:08:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:08:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:08:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:08:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:08:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v11: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 554 B/s wr, 15 op/s
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:08:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:47 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/160847 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:08:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:47 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:08:47 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: Updating compute-0:/etc/ceph/ceph.conf
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: Updating compute-1:/etc/ceph/ceph.conf
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: Updating compute-2:/etc/ceph/ceph.conf
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 21 11:08:48 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 124 pg[10.19( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=7 ec=58/49 lis/c=122/84 les/c/f=123/85/0 sis=124) [0] r=0 lpr=124 pi=[84,124)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:08:48 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 124 pg[10.19( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=7 ec=58/49 lis/c=122/84 les/c/f=123/85/0 sis=124) [0] r=0 lpr=124 pi=[84,124)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:08:48 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:08:48 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:08:48 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:08:48 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 11:08:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:48.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.381302) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011728381370, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1190, "num_deletes": 265, "total_data_size": 3565462, "memory_usage": 3727832, "flush_reason": "Manual Compaction"}
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011728400902, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3407959, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8028, "largest_seqno": 9217, "table_properties": {"data_size": 3401977, "index_size": 3181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13386, "raw_average_key_size": 19, "raw_value_size": 3389194, "raw_average_value_size": 4933, "num_data_blocks": 141, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011689, "oldest_key_time": 1769011689, "file_creation_time": 1769011728, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 19640 microseconds, and 7901 cpu microseconds.
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.400949) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3407959 bytes OK
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.400970) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.403302) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.403348) EVENT_LOG_v1 {"time_micros": 1769011728403340, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.403369) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3559471, prev total WAL file size 3559471, number of live WAL files 2.
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.404282) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323635' seq:0, type:0; will stop at (end)
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3328KB)], [20(8607KB)]
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011728404333, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 12222374, "oldest_snapshot_seqno": -1}
Jan 21 11:08:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:48 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e40002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:48 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:08:48 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:08:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:48.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3723 keys, 11666794 bytes, temperature: kUnknown
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011728478502, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 11666794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11635912, "index_size": 20468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 95089, "raw_average_key_size": 25, "raw_value_size": 11561626, "raw_average_value_size": 3105, "num_data_blocks": 887, "num_entries": 3723, "num_filter_entries": 3723, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769011728, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.478758) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11666794 bytes
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.480416) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.6 rd, 157.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(7.0) write-amplify(3.4) OK, records in: 4285, records dropped: 562 output_compression: NoCompression
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.480434) EVENT_LOG_v1 {"time_micros": 1769011728480425, "job": 6, "event": "compaction_finished", "compaction_time_micros": 74260, "compaction_time_cpu_micros": 24487, "output_level": 6, "num_output_files": 1, "total_output_size": 11666794, "num_input_records": 4285, "num_output_records": 3723, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011728481321, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011728482918, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.404209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.483053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.483061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.483066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.483068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:08:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:08:48.483069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:08:48 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:08:48 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:08:48 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:08:48 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:08:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v13: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 621 B/s wr, 17 op/s
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.conf
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 21 11:08:49 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:08:49 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 21 11:08:49 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 125 pg[10.19( v 56'1081 (0'0,56'1081] local-lis/les=124/125 n=7 ec=58/49 lis/c=122/84 les/c/f=123/85/0 sis=124) [0] r=0 lpr=124 pi=[84,124)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:49 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:08:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:49 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:08:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:08:50 np0005590810 ceph-mon[74380]: Updating compute-2:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:08:50 np0005590810 ceph-mon[74380]: Updating compute-0:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:08:50 np0005590810 ceph-mon[74380]: Updating compute-1:/var/lib/ceph/d9745984-fea8-5195-8ec5-61f685b5c785/config/ceph.client.admin.keyring
Jan 21 11:08:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:08:50 np0005590810 podman[103536]: 2026-01-21 16:08:50.352791506 +0000 UTC m=+0.036382382 container create f33fbb3fe9e3206883a6cc2b65e6d1fb20144e991df2f89fb142a6815cd00aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_driscoll, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 11:08:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:50.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:50 np0005590810 systemd[1]: Started libpod-conmon-f33fbb3fe9e3206883a6cc2b65e6d1fb20144e991df2f89fb142a6815cd00aa4.scope.
Jan 21 11:08:50 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:50 np0005590810 podman[103536]: 2026-01-21 16:08:50.427501008 +0000 UTC m=+0.111091914 container init f33fbb3fe9e3206883a6cc2b65e6d1fb20144e991df2f89fb142a6815cd00aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_driscoll, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 11:08:50 np0005590810 podman[103536]: 2026-01-21 16:08:50.338007806 +0000 UTC m=+0.021598702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:08:50 np0005590810 podman[103536]: 2026-01-21 16:08:50.4353173 +0000 UTC m=+0.118908176 container start f33fbb3fe9e3206883a6cc2b65e6d1fb20144e991df2f89fb142a6815cd00aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_driscoll, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:08:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:50 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:50 np0005590810 podman[103536]: 2026-01-21 16:08:50.438235741 +0000 UTC m=+0.121826617 container attach f33fbb3fe9e3206883a6cc2b65e6d1fb20144e991df2f89fb142a6815cd00aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:08:50 np0005590810 confident_driscoll[103552]: 167 167
Jan 21 11:08:50 np0005590810 systemd[1]: libpod-f33fbb3fe9e3206883a6cc2b65e6d1fb20144e991df2f89fb142a6815cd00aa4.scope: Deactivated successfully.
Jan 21 11:08:50 np0005590810 podman[103536]: 2026-01-21 16:08:50.441494173 +0000 UTC m=+0.125085049 container died f33fbb3fe9e3206883a6cc2b65e6d1fb20144e991df2f89fb142a6815cd00aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 11:08:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:08:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:50.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:08:50 np0005590810 systemd[1]: var-lib-containers-storage-overlay-dd265060de32f1e14f30d5c47ab43e12dadc378d8a601359d9bd2dd05c2398d1-merged.mount: Deactivated successfully.
Jan 21 11:08:50 np0005590810 podman[103536]: 2026-01-21 16:08:50.498104672 +0000 UTC m=+0.181695548 container remove f33fbb3fe9e3206883a6cc2b65e6d1fb20144e991df2f89fb142a6815cd00aa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_driscoll, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Jan 21 11:08:50 np0005590810 systemd[1]: libpod-conmon-f33fbb3fe9e3206883a6cc2b65e6d1fb20144e991df2f89fb142a6815cd00aa4.scope: Deactivated successfully.
Jan 21 11:08:50 np0005590810 podman[103576]: 2026-01-21 16:08:50.653750609 +0000 UTC m=+0.044400761 container create fd5cdbe745abb9449304b3e42e638c39ba07c275b76e6dc9eacf8840a57bb368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:08:50 np0005590810 systemd[1]: Started libpod-conmon-fd5cdbe745abb9449304b3e42e638c39ba07c275b76e6dc9eacf8840a57bb368.scope.
Jan 21 11:08:50 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e527f8447ca77c7249c929975be42c1981dfc3b915705996e8d60be45b864290/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e527f8447ca77c7249c929975be42c1981dfc3b915705996e8d60be45b864290/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e527f8447ca77c7249c929975be42c1981dfc3b915705996e8d60be45b864290/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e527f8447ca77c7249c929975be42c1981dfc3b915705996e8d60be45b864290/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:50 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e527f8447ca77c7249c929975be42c1981dfc3b915705996e8d60be45b864290/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:50 np0005590810 podman[103576]: 2026-01-21 16:08:50.634657246 +0000 UTC m=+0.025307418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:08:50 np0005590810 podman[103576]: 2026-01-21 16:08:50.738564405 +0000 UTC m=+0.129214577 container init fd5cdbe745abb9449304b3e42e638c39ba07c275b76e6dc9eacf8840a57bb368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_davinci, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:08:50 np0005590810 podman[103576]: 2026-01-21 16:08:50.743929942 +0000 UTC m=+0.134580094 container start fd5cdbe745abb9449304b3e42e638c39ba07c275b76e6dc9eacf8840a57bb368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_davinci, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:08:50 np0005590810 podman[103576]: 2026-01-21 16:08:50.747336708 +0000 UTC m=+0.137986880 container attach fd5cdbe745abb9449304b3e42e638c39ba07c275b76e6dc9eacf8840a57bb368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 21 11:08:51 np0005590810 naughty_davinci[103594]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:08:51 np0005590810 naughty_davinci[103594]: --> All data devices are unavailable
Jan 21 11:08:51 np0005590810 systemd[1]: libpod-fd5cdbe745abb9449304b3e42e638c39ba07c275b76e6dc9eacf8840a57bb368.scope: Deactivated successfully.
Jan 21 11:08:51 np0005590810 podman[103576]: 2026-01-21 16:08:51.066775826 +0000 UTC m=+0.457425978 container died fd5cdbe745abb9449304b3e42e638c39ba07c275b76e6dc9eacf8840a57bb368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_davinci, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:08:51 np0005590810 systemd[1]: var-lib-containers-storage-overlay-e527f8447ca77c7249c929975be42c1981dfc3b915705996e8d60be45b864290-merged.mount: Deactivated successfully.
Jan 21 11:08:51 np0005590810 podman[103576]: 2026-01-21 16:08:51.112947871 +0000 UTC m=+0.503598023 container remove fd5cdbe745abb9449304b3e42e638c39ba07c275b76e6dc9eacf8840a57bb368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_davinci, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:08:51 np0005590810 systemd[1]: libpod-conmon-fd5cdbe745abb9449304b3e42e638c39ba07c275b76e6dc9eacf8840a57bb368.scope: Deactivated successfully.
Jan 21 11:08:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v15: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 188 B/s rd, 0 B/s wr, 0 op/s; 20 B/s, 1 objects/s recovering
Jan 21 11:08:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 21 11:08:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 21 11:08:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 21 11:08:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 21 11:08:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 21 11:08:51 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 21 11:08:51 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 21 11:08:51 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 126 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=91/91 les/c/f=92/92/0 sis=126) [0] r=0 lpr=126 pi=[91,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:08:51 np0005590810 podman[103713]: 2026-01-21 16:08:51.696463446 +0000 UTC m=+0.040585903 container create 7f1db20466090dfbc045a4d04d6ab30b8e758aeaacaf597f22f0984953b5725c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hofstadter, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 21 11:08:51 np0005590810 systemd[1]: Started libpod-conmon-7f1db20466090dfbc045a4d04d6ab30b8e758aeaacaf597f22f0984953b5725c.scope.
Jan 21 11:08:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:51 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e40002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:51 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:51 np0005590810 podman[103713]: 2026-01-21 16:08:51.678814477 +0000 UTC m=+0.022936954 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:08:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:51 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:51 np0005590810 podman[103713]: 2026-01-21 16:08:51.880768514 +0000 UTC m=+0.224890991 container init 7f1db20466090dfbc045a4d04d6ab30b8e758aeaacaf597f22f0984953b5725c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hofstadter, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:08:51 np0005590810 podman[103713]: 2026-01-21 16:08:51.888726071 +0000 UTC m=+0.232848548 container start 7f1db20466090dfbc045a4d04d6ab30b8e758aeaacaf597f22f0984953b5725c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 11:08:51 np0005590810 podman[103713]: 2026-01-21 16:08:51.892379185 +0000 UTC m=+0.236501672 container attach 7f1db20466090dfbc045a4d04d6ab30b8e758aeaacaf597f22f0984953b5725c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 11:08:51 np0005590810 ecstatic_hofstadter[103729]: 167 167
Jan 21 11:08:51 np0005590810 systemd[1]: libpod-7f1db20466090dfbc045a4d04d6ab30b8e758aeaacaf597f22f0984953b5725c.scope: Deactivated successfully.
Jan 21 11:08:51 np0005590810 podman[103713]: 2026-01-21 16:08:51.893448158 +0000 UTC m=+0.237570625 container died 7f1db20466090dfbc045a4d04d6ab30b8e758aeaacaf597f22f0984953b5725c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:08:51 np0005590810 systemd[1]: var-lib-containers-storage-overlay-184ab5e626f33aa6000f2fb683136ce23973fa10ecf24184ff2fe8dc0abfbcda-merged.mount: Deactivated successfully.
Jan 21 11:08:51 np0005590810 podman[103713]: 2026-01-21 16:08:51.922944995 +0000 UTC m=+0.267067452 container remove 7f1db20466090dfbc045a4d04d6ab30b8e758aeaacaf597f22f0984953b5725c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hofstadter, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:08:51 np0005590810 systemd[1]: libpod-conmon-7f1db20466090dfbc045a4d04d6ab30b8e758aeaacaf597f22f0984953b5725c.scope: Deactivated successfully.
Jan 21 11:08:52 np0005590810 podman[103754]: 2026-01-21 16:08:52.082404001 +0000 UTC m=+0.046409743 container create db27595d752e95748c4b5024786b477a1f849981ad5b353af5596caf0ebf8144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_swanson, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:08:52 np0005590810 systemd[1]: Started libpod-conmon-db27595d752e95748c4b5024786b477a1f849981ad5b353af5596caf0ebf8144.scope.
Jan 21 11:08:52 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6317c583c4863b487207a7ef96c94a5613edcfe05831e870e1a14360965fa4ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6317c583c4863b487207a7ef96c94a5613edcfe05831e870e1a14360965fa4ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6317c583c4863b487207a7ef96c94a5613edcfe05831e870e1a14360965fa4ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6317c583c4863b487207a7ef96c94a5613edcfe05831e870e1a14360965fa4ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:52 np0005590810 podman[103754]: 2026-01-21 16:08:52.063952278 +0000 UTC m=+0.027958050 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:08:52 np0005590810 podman[103754]: 2026-01-21 16:08:52.169646472 +0000 UTC m=+0.133652234 container init db27595d752e95748c4b5024786b477a1f849981ad5b353af5596caf0ebf8144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_swanson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:08:52 np0005590810 podman[103754]: 2026-01-21 16:08:52.178895199 +0000 UTC m=+0.142900941 container start db27595d752e95748c4b5024786b477a1f849981ad5b353af5596caf0ebf8144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 21 11:08:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 21 11:08:52 np0005590810 podman[103754]: 2026-01-21 16:08:52.183113911 +0000 UTC m=+0.147119683 container attach db27595d752e95748c4b5024786b477a1f849981ad5b353af5596caf0ebf8144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:08:52 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 21 11:08:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 21 11:08:52 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 21 11:08:52 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 127 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=91/91 les/c/f=92/92/0 sis=127) [0]/[1] r=-1 lpr=127 pi=[91,127)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:08:52 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 127 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=91/91 les/c/f=92/92/0 sis=127) [0]/[1] r=-1 lpr=127 pi=[91,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 11:08:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:52.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:52 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]: {
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:    "0": [
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:        {
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            "devices": [
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "/dev/loop3"
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            ],
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            "lv_name": "ceph_lv0",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            "lv_size": "21470642176",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            "name": "ceph_lv0",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            "tags": {
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.cluster_name": "ceph",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.crush_device_class": "",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.encrypted": "0",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.osd_id": "0",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.type": "block",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.vdo": "0",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:                "ceph.with_tpm": "0"
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            },
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            "type": "block",
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:            "vg_name": "ceph_vg0"
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:        }
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]:    ]
Jan 21 11:08:52 np0005590810 vigorous_swanson[103770]: }
Jan 21 11:08:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:52.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:52 np0005590810 systemd[1]: libpod-db27595d752e95748c4b5024786b477a1f849981ad5b353af5596caf0ebf8144.scope: Deactivated successfully.
Jan 21 11:08:52 np0005590810 podman[103754]: 2026-01-21 16:08:52.498977477 +0000 UTC m=+0.462983219 container died db27595d752e95748c4b5024786b477a1f849981ad5b353af5596caf0ebf8144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_swanson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:08:52 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6317c583c4863b487207a7ef96c94a5613edcfe05831e870e1a14360965fa4ae-merged.mount: Deactivated successfully.
Jan 21 11:08:52 np0005590810 podman[103754]: 2026-01-21 16:08:52.562656446 +0000 UTC m=+0.526662188 container remove db27595d752e95748c4b5024786b477a1f849981ad5b353af5596caf0ebf8144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:08:52 np0005590810 systemd[1]: libpod-conmon-db27595d752e95748c4b5024786b477a1f849981ad5b353af5596caf0ebf8144.scope: Deactivated successfully.
Jan 21 11:08:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v18: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 B/s wr, 0 op/s; 21 B/s, 1 objects/s recovering
Jan 21 11:08:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 21 11:08:53 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 21 11:08:53 np0005590810 podman[103892]: 2026-01-21 16:08:53.154666325 +0000 UTC m=+0.044275986 container create d755d996c3eef7ad1ae2aae941acb900d589a4d570d52f72648d763bdd1745c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mclean, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 21 11:08:53 np0005590810 systemd[1]: Started libpod-conmon-d755d996c3eef7ad1ae2aae941acb900d589a4d570d52f72648d763bdd1745c5.scope.
Jan 21 11:08:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 21 11:08:53 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 21 11:08:53 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:53 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 21 11:08:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 21 11:08:53 np0005590810 podman[103892]: 2026-01-21 16:08:53.136698177 +0000 UTC m=+0.026307858 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:08:53 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 21 11:08:53 np0005590810 podman[103892]: 2026-01-21 16:08:53.240713009 +0000 UTC m=+0.130322670 container init d755d996c3eef7ad1ae2aae941acb900d589a4d570d52f72648d763bdd1745c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:08:53 np0005590810 podman[103892]: 2026-01-21 16:08:53.250315028 +0000 UTC m=+0.139924719 container start d755d996c3eef7ad1ae2aae941acb900d589a4d570d52f72648d763bdd1745c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mclean, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:08:53 np0005590810 eager_mclean[103907]: 167 167
Jan 21 11:08:53 np0005590810 podman[103892]: 2026-01-21 16:08:53.254927351 +0000 UTC m=+0.144537032 container attach d755d996c3eef7ad1ae2aae941acb900d589a4d570d52f72648d763bdd1745c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 21 11:08:53 np0005590810 systemd[1]: libpod-d755d996c3eef7ad1ae2aae941acb900d589a4d570d52f72648d763bdd1745c5.scope: Deactivated successfully.
Jan 21 11:08:53 np0005590810 podman[103892]: 2026-01-21 16:08:53.256007375 +0000 UTC m=+0.145617046 container died d755d996c3eef7ad1ae2aae941acb900d589a4d570d52f72648d763bdd1745c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mclean, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 11:08:53 np0005590810 systemd[1]: var-lib-containers-storage-overlay-338529a588f4fdaee1079ae139483c2a6cd8e1a006d1b915f1bb87779742912d-merged.mount: Deactivated successfully.
Jan 21 11:08:53 np0005590810 podman[103892]: 2026-01-21 16:08:53.297260507 +0000 UTC m=+0.186870168 container remove d755d996c3eef7ad1ae2aae941acb900d589a4d570d52f72648d763bdd1745c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mclean, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:08:53 np0005590810 systemd[1]: libpod-conmon-d755d996c3eef7ad1ae2aae941acb900d589a4d570d52f72648d763bdd1745c5.scope: Deactivated successfully.
Jan 21 11:08:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:08:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 21 11:08:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 21 11:08:53 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 21 11:08:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 129 pg[10.1b( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=2 ec=58/49 lis/c=127/91 les/c/f=128/92/0 sis=129) [0] r=0 lpr=129 pi=[91,129)/1 luod=0'0 crt=56'1081 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 21 11:08:53 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 129 pg[10.1b( v 56'1081 (0'0,56'1081] local-lis/les=0/0 n=2 ec=58/49 lis/c=127/91 les/c/f=128/92/0 sis=129) [0] r=0 lpr=129 pi=[91,129)/1 crt=56'1081 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 11:08:53 np0005590810 podman[103930]: 2026-01-21 16:08:53.491583497 +0000 UTC m=+0.047761326 container create 6e49e42b807058e0ce95e953c2362b644d201828981471e0264439c53bcdc71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_dewdney, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 11:08:53 np0005590810 systemd[1]: Started libpod-conmon-6e49e42b807058e0ce95e953c2362b644d201828981471e0264439c53bcdc71a.scope.
Jan 21 11:08:53 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:53 np0005590810 podman[103930]: 2026-01-21 16:08:53.471389199 +0000 UTC m=+0.027567038 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:08:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026bc1aa7e90c151190bada4f674e2f01b5626bf981e52ee7b564844ca9094ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026bc1aa7e90c151190bada4f674e2f01b5626bf981e52ee7b564844ca9094ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026bc1aa7e90c151190bada4f674e2f01b5626bf981e52ee7b564844ca9094ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026bc1aa7e90c151190bada4f674e2f01b5626bf981e52ee7b564844ca9094ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:53 np0005590810 podman[103930]: 2026-01-21 16:08:53.582879374 +0000 UTC m=+0.139057243 container init 6e49e42b807058e0ce95e953c2362b644d201828981471e0264439c53bcdc71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:08:53 np0005590810 podman[103930]: 2026-01-21 16:08:53.59497519 +0000 UTC m=+0.151152999 container start 6e49e42b807058e0ce95e953c2362b644d201828981471e0264439c53bcdc71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_dewdney, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 21 11:08:53 np0005590810 podman[103930]: 2026-01-21 16:08:53.598147459 +0000 UTC m=+0.154325298 container attach 6e49e42b807058e0ce95e953c2362b644d201828981471e0264439c53bcdc71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:08:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:53 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:53 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e40002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 21 11:08:54 np0005590810 lvm[104021]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:08:54 np0005590810 lvm[104021]: VG ceph_vg0 finished
Jan 21 11:08:54 np0005590810 xenodochial_dewdney[103947]: {}
Jan 21 11:08:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:54.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 21 11:08:54 np0005590810 systemd[1]: libpod-6e49e42b807058e0ce95e953c2362b644d201828981471e0264439c53bcdc71a.scope: Deactivated successfully.
Jan 21 11:08:54 np0005590810 podman[103930]: 2026-01-21 16:08:54.404708985 +0000 UTC m=+0.960886804 container died 6e49e42b807058e0ce95e953c2362b644d201828981471e0264439c53bcdc71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_dewdney, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:08:54 np0005590810 systemd[1]: libpod-6e49e42b807058e0ce95e953c2362b644d201828981471e0264439c53bcdc71a.scope: Consumed 1.248s CPU time.
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 21 11:08:54 np0005590810 ceph-osd[82794]: osd.0 pg_epoch: 130 pg[10.1b( v 56'1081 (0'0,56'1081] local-lis/les=129/130 n=2 ec=58/49 lis/c=127/91 les/c/f=128/92/0 sis=129) [0] r=0 lpr=129 pi=[91,129)/1 crt=56'1081 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 11:08:54 np0005590810 systemd[1]: var-lib-containers-storage-overlay-026bc1aa7e90c151190bada4f674e2f01b5626bf981e52ee7b564844ca9094ac-merged.mount: Deactivated successfully.
Jan 21 11:08:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:54 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:54 np0005590810 podman[103930]: 2026-01-21 16:08:54.452503241 +0000 UTC m=+1.008681060 container remove 6e49e42b807058e0ce95e953c2362b644d201828981471e0264439c53bcdc71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_dewdney, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:08:54 np0005590810 systemd[1]: libpod-conmon-6e49e42b807058e0ce95e953c2362b644d201828981471e0264439c53bcdc71a.scope: Deactivated successfully.
Jan 21 11:08:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:54.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:54 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 11:08:54 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:08:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:08:54 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 11:08:54 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 11:08:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v22: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 21 11:08:55 np0005590810 podman[104155]: 2026-01-21 16:08:55.18510362 +0000 UTC m=+0.043655938 container create 45f46d6dcc06906135112fc7584d7151e5177c32790ebd381f9613ee39ce4ca5 (image=quay.io/ceph/ceph:v19, name=mystifying_ellis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 11:08:55 np0005590810 systemd[1]: Started libpod-conmon-45f46d6dcc06906135112fc7584d7151e5177c32790ebd381f9613ee39ce4ca5.scope.
Jan 21 11:08:55 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:55 np0005590810 podman[104155]: 2026-01-21 16:08:55.166079918 +0000 UTC m=+0.024632256 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:08:55 np0005590810 podman[104155]: 2026-01-21 16:08:55.276707957 +0000 UTC m=+0.135260305 container init 45f46d6dcc06906135112fc7584d7151e5177c32790ebd381f9613ee39ce4ca5 (image=quay.io/ceph/ceph:v19, name=mystifying_ellis, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 11:08:55 np0005590810 podman[104155]: 2026-01-21 16:08:55.28582814 +0000 UTC m=+0.144380458 container start 45f46d6dcc06906135112fc7584d7151e5177c32790ebd381f9613ee39ce4ca5 (image=quay.io/ceph/ceph:v19, name=mystifying_ellis, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 21 11:08:55 np0005590810 podman[104155]: 2026-01-21 16:08:55.289794783 +0000 UTC m=+0.148347101 container attach 45f46d6dcc06906135112fc7584d7151e5177c32790ebd381f9613ee39ce4ca5 (image=quay.io/ceph/ceph:v19, name=mystifying_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:08:55 np0005590810 mystifying_ellis[104171]: 167 167
Jan 21 11:08:55 np0005590810 systemd[1]: libpod-45f46d6dcc06906135112fc7584d7151e5177c32790ebd381f9613ee39ce4ca5.scope: Deactivated successfully.
Jan 21 11:08:55 np0005590810 podman[104155]: 2026-01-21 16:08:55.293294012 +0000 UTC m=+0.151846320 container died 45f46d6dcc06906135112fc7584d7151e5177c32790ebd381f9613ee39ce4ca5 (image=quay.io/ceph/ceph:v19, name=mystifying_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:08:55 np0005590810 systemd[1]: var-lib-containers-storage-overlay-772e64657330ba26c4bafedd9c6b3168eccef419f7e50e9ecec2732af2a89259-merged.mount: Deactivated successfully.
Jan 21 11:08:55 np0005590810 podman[104155]: 2026-01-21 16:08:55.329098005 +0000 UTC m=+0.187650323 container remove 45f46d6dcc06906135112fc7584d7151e5177c32790ebd381f9613ee39ce4ca5 (image=quay.io/ceph/ceph:v19, name=mystifying_ellis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 21 11:08:55 np0005590810 systemd[1]: libpod-conmon-45f46d6dcc06906135112fc7584d7151e5177c32790ebd381f9613ee39ce4ca5.scope: Deactivated successfully.
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:55 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ygffhs (monmap changed)...
Jan 21 11:08:55 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ygffhs (monmap changed)...
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ygffhs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ygffhs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 11:08:55 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ygffhs on compute-0
Jan 21 11:08:55 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ygffhs on compute-0
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 21 11:08:55 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 21 11:08:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:08:55] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:08:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:08:55] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:08:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:55 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c002bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:55 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:55 np0005590810 podman[104255]: 2026-01-21 16:08:55.861962266 +0000 UTC m=+0.046592729 container create 94ec559030f4b0a432c7ccc43d392b3de2a9ad12cdb8ce03d6b64e71c8d31d59 (image=quay.io/ceph/ceph:v19, name=relaxed_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:08:55 np0005590810 systemd[1]: Started libpod-conmon-94ec559030f4b0a432c7ccc43d392b3de2a9ad12cdb8ce03d6b64e71c8d31d59.scope.
Jan 21 11:08:55 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:55 np0005590810 podman[104255]: 2026-01-21 16:08:55.844003577 +0000 UTC m=+0.028634060 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 21 11:08:55 np0005590810 podman[104255]: 2026-01-21 16:08:55.938100832 +0000 UTC m=+0.122731305 container init 94ec559030f4b0a432c7ccc43d392b3de2a9ad12cdb8ce03d6b64e71c8d31d59 (image=quay.io/ceph/ceph:v19, name=relaxed_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:08:55 np0005590810 podman[104255]: 2026-01-21 16:08:55.943836501 +0000 UTC m=+0.128466954 container start 94ec559030f4b0a432c7ccc43d392b3de2a9ad12cdb8ce03d6b64e71c8d31d59 (image=quay.io/ceph/ceph:v19, name=relaxed_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 21 11:08:55 np0005590810 relaxed_curran[104271]: 167 167
Jan 21 11:08:55 np0005590810 systemd[1]: libpod-94ec559030f4b0a432c7ccc43d392b3de2a9ad12cdb8ce03d6b64e71c8d31d59.scope: Deactivated successfully.
Jan 21 11:08:55 np0005590810 podman[104255]: 2026-01-21 16:08:55.947260086 +0000 UTC m=+0.131890569 container attach 94ec559030f4b0a432c7ccc43d392b3de2a9ad12cdb8ce03d6b64e71c8d31d59 (image=quay.io/ceph/ceph:v19, name=relaxed_curran, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:08:55 np0005590810 podman[104255]: 2026-01-21 16:08:55.948558787 +0000 UTC m=+0.133189250 container died 94ec559030f4b0a432c7ccc43d392b3de2a9ad12cdb8ce03d6b64e71c8d31d59 (image=quay.io/ceph/ceph:v19, name=relaxed_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:08:55 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ee5f5f84accd518890311f2cb76668867bb018f87b1ea331cce4b463cd0a6e9a-merged.mount: Deactivated successfully.
Jan 21 11:08:55 np0005590810 podman[104255]: 2026-01-21 16:08:55.986680921 +0000 UTC m=+0.171311384 container remove 94ec559030f4b0a432c7ccc43d392b3de2a9ad12cdb8ce03d6b64e71c8d31d59 (image=quay.io/ceph/ceph:v19, name=relaxed_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:08:55 np0005590810 systemd[1]: libpod-conmon-94ec559030f4b0a432c7ccc43d392b3de2a9ad12cdb8ce03d6b64e71c8d31d59.scope: Deactivated successfully.
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:56 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 21 11:08:56 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:08:56 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 21 11:08:56 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 21 11:08:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:56.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: Reconfiguring mgr.compute-0.ygffhs (monmap changed)...
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ygffhs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: Reconfiguring daemon mgr.compute-0.ygffhs on compute-0
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 11:08:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:56 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e40002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:56.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:56 np0005590810 podman[104354]: 2026-01-21 16:08:56.539056999 +0000 UTC m=+0.045456844 container create da5e3fc2a0389632ea0218875903db4952acea362b9a02908f6265fdfc8bfe92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:08:56 np0005590810 systemd[1]: Started libpod-conmon-da5e3fc2a0389632ea0218875903db4952acea362b9a02908f6265fdfc8bfe92.scope.
Jan 21 11:08:56 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:56 np0005590810 podman[104354]: 2026-01-21 16:08:56.519606695 +0000 UTC m=+0.026006570 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:08:56 np0005590810 podman[104354]: 2026-01-21 16:08:56.699429604 +0000 UTC m=+0.205829459 container init da5e3fc2a0389632ea0218875903db4952acea362b9a02908f6265fdfc8bfe92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:08:56 np0005590810 podman[104354]: 2026-01-21 16:08:56.705801951 +0000 UTC m=+0.212201796 container start da5e3fc2a0389632ea0218875903db4952acea362b9a02908f6265fdfc8bfe92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mahavira, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:08:56 np0005590810 podman[104354]: 2026-01-21 16:08:56.709650351 +0000 UTC m=+0.216050246 container attach da5e3fc2a0389632ea0218875903db4952acea362b9a02908f6265fdfc8bfe92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:08:56 np0005590810 nifty_mahavira[104370]: 167 167
Jan 21 11:08:56 np0005590810 systemd[1]: libpod-da5e3fc2a0389632ea0218875903db4952acea362b9a02908f6265fdfc8bfe92.scope: Deactivated successfully.
Jan 21 11:08:56 np0005590810 podman[104354]: 2026-01-21 16:08:56.712281103 +0000 UTC m=+0.218680948 container died da5e3fc2a0389632ea0218875903db4952acea362b9a02908f6265fdfc8bfe92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:08:56 np0005590810 systemd[1]: var-lib-containers-storage-overlay-073607552b68fcb302b95019ce5aa4e152f52ca87b2bf4e9faf248f15fbb334a-merged.mount: Deactivated successfully.
Jan 21 11:08:56 np0005590810 podman[104354]: 2026-01-21 16:08:56.74755812 +0000 UTC m=+0.253957965 container remove da5e3fc2a0389632ea0218875903db4952acea362b9a02908f6265fdfc8bfe92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mahavira, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:08:56 np0005590810 systemd[1]: libpod-conmon-da5e3fc2a0389632ea0218875903db4952acea362b9a02908f6265fdfc8bfe92.scope: Deactivated successfully.
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:56 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 21 11:08:56 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:08:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:08:56 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Jan 21 11:08:56 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Jan 21 11:08:57 np0005590810 ceph-mgr[74671]: [dashboard INFO request] [192.168.122.100:34260] [POST] [200] [0.123s] [4.0B] [9eda2630-182f-4315-a49e-5d9976c74c2c] /api/prometheus_receiver
Jan 21 11:08:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v24: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 21 11:08:57 np0005590810 podman[104455]: 2026-01-21 16:08:57.295913952 +0000 UTC m=+0.044121213 container create c42a5d8298f5bc7dd6923fe5babca5d11fde85ff0a09e2608748f310bf4a78f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 21 11:08:57 np0005590810 systemd[1]: Started libpod-conmon-c42a5d8298f5bc7dd6923fe5babca5d11fde85ff0a09e2608748f310bf4a78f9.scope.
Jan 21 11:08:57 np0005590810 podman[104455]: 2026-01-21 16:08:57.275791536 +0000 UTC m=+0.023998787 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:08:57 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:57 np0005590810 podman[104455]: 2026-01-21 16:08:57.397510159 +0000 UTC m=+0.145717390 container init c42a5d8298f5bc7dd6923fe5babca5d11fde85ff0a09e2608748f310bf4a78f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 11:08:57 np0005590810 podman[104455]: 2026-01-21 16:08:57.405885439 +0000 UTC m=+0.154092660 container start c42a5d8298f5bc7dd6923fe5babca5d11fde85ff0a09e2608748f310bf4a78f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_vaughan, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Jan 21 11:08:57 np0005590810 podman[104455]: 2026-01-21 16:08:57.409940906 +0000 UTC m=+0.158148157 container attach c42a5d8298f5bc7dd6923fe5babca5d11fde85ff0a09e2608748f310bf4a78f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_vaughan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 11:08:57 np0005590810 nostalgic_vaughan[104471]: 167 167
Jan 21 11:08:57 np0005590810 systemd[1]: libpod-c42a5d8298f5bc7dd6923fe5babca5d11fde85ff0a09e2608748f310bf4a78f9.scope: Deactivated successfully.
Jan 21 11:08:57 np0005590810 podman[104455]: 2026-01-21 16:08:57.412901818 +0000 UTC m=+0.161109049 container died c42a5d8298f5bc7dd6923fe5babca5d11fde85ff0a09e2608748f310bf4a78f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 21 11:08:57 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c3ccc6efb4d7927c04dd0eccf7b9d5816b668efa87a1daa781e0ae70bba3f683-merged.mount: Deactivated successfully.
Jan 21 11:08:57 np0005590810 podman[104455]: 2026-01-21 16:08:57.456187252 +0000 UTC m=+0.204394483 container remove c42a5d8298f5bc7dd6923fe5babca5d11fde85ff0a09e2608748f310bf4a78f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_vaughan, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:08:57 np0005590810 systemd[1]: libpod-conmon-c42a5d8298f5bc7dd6923fe5babca5d11fde85ff0a09e2608748f310bf4a78f9.scope: Deactivated successfully.
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:57 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 21 11:08:57 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 21 11:08:57 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 21 11:08:57 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 21 11:08:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:57 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:57 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c002bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 21 11:08:57 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 21 11:08:58 np0005590810 podman[104559]: 2026-01-21 16:08:58.072738224 +0000 UTC m=+0.047596280 volume create 45940b34e71e2caa039ef97ebe972c6911a53aef94831484bd282a82892ade4a
Jan 21 11:08:58 np0005590810 podman[104559]: 2026-01-21 16:08:58.081768675 +0000 UTC m=+0.056626731 container create 494c03746c291a6e8a8d4659d0a89444e142a6a8b5dc26477c274a710263b693 (image=quay.io/prometheus/alertmanager:v0.25.0, name=boring_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 systemd[1]: Started libpod-conmon-494c03746c291a6e8a8d4659d0a89444e142a6a8b5dc26477c274a710263b693.scope.
Jan 21 11:08:58 np0005590810 podman[104559]: 2026-01-21 16:08:58.056859911 +0000 UTC m=+0.031717987 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 21 11:08:58 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:58 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7539e4485611e7e55adda4d90445e410b411be57249fd13754e35210d4939b/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:58 np0005590810 podman[104559]: 2026-01-21 16:08:58.171855845 +0000 UTC m=+0.146713931 container init 494c03746c291a6e8a8d4659d0a89444e142a6a8b5dc26477c274a710263b693 (image=quay.io/prometheus/alertmanager:v0.25.0, name=boring_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 podman[104559]: 2026-01-21 16:08:58.182744063 +0000 UTC m=+0.157602119 container start 494c03746c291a6e8a8d4659d0a89444e142a6a8b5dc26477c274a710263b693 (image=quay.io/prometheus/alertmanager:v0.25.0, name=boring_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 boring_payne[104575]: 65534 65534
Jan 21 11:08:58 np0005590810 systemd[1]: libpod-494c03746c291a6e8a8d4659d0a89444e142a6a8b5dc26477c274a710263b693.scope: Deactivated successfully.
Jan 21 11:08:58 np0005590810 podman[104559]: 2026-01-21 16:08:58.186619184 +0000 UTC m=+0.161477240 container attach 494c03746c291a6e8a8d4659d0a89444e142a6a8b5dc26477c274a710263b693 (image=quay.io/prometheus/alertmanager:v0.25.0, name=boring_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 podman[104559]: 2026-01-21 16:08:58.187743669 +0000 UTC m=+0.162601725 container died 494c03746c291a6e8a8d4659d0a89444e142a6a8b5dc26477c274a710263b693 (image=quay.io/prometheus/alertmanager:v0.25.0, name=boring_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ff7539e4485611e7e55adda4d90445e410b411be57249fd13754e35210d4939b-merged.mount: Deactivated successfully.
Jan 21 11:08:58 np0005590810 podman[104559]: 2026-01-21 16:08:58.226432481 +0000 UTC m=+0.201290537 container remove 494c03746c291a6e8a8d4659d0a89444e142a6a8b5dc26477c274a710263b693 (image=quay.io/prometheus/alertmanager:v0.25.0, name=boring_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 podman[104559]: 2026-01-21 16:08:58.231065165 +0000 UTC m=+0.205923231 volume remove 45940b34e71e2caa039ef97ebe972c6911a53aef94831484bd282a82892ade4a
Jan 21 11:08:58 np0005590810 systemd[1]: libpod-conmon-494c03746c291a6e8a8d4659d0a89444e142a6a8b5dc26477c274a710263b693.scope: Deactivated successfully.
Jan 21 11:08:58 np0005590810 podman[104591]: 2026-01-21 16:08:58.292062331 +0000 UTC m=+0.037592560 volume create a48a470ab9948db6b3be5290b8835844635ce626849d8bdedb67a6a878024942
Jan 21 11:08:58 np0005590810 podman[104591]: 2026-01-21 16:08:58.298583424 +0000 UTC m=+0.044113643 container create 0d5ab8072cda8cca7609698be50ec72d8db7e8dfa0673d9a1b2bc2fe74134879 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vigorous_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 systemd[1]: Started libpod-conmon-0d5ab8072cda8cca7609698be50ec72d8db7e8dfa0673d9a1b2bc2fe74134879.scope.
Jan 21 11:08:58 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:08:58 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32475887fcac558dcbcb8df59bef37ad0b436209258dbdc5db5688eb8297543c/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:58 np0005590810 podman[104591]: 2026-01-21 16:08:58.364777041 +0000 UTC m=+0.110307280 container init 0d5ab8072cda8cca7609698be50ec72d8db7e8dfa0673d9a1b2bc2fe74134879 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vigorous_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 podman[104591]: 2026-01-21 16:08:58.370701975 +0000 UTC m=+0.116232194 container start 0d5ab8072cda8cca7609698be50ec72d8db7e8dfa0673d9a1b2bc2fe74134879 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vigorous_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 vigorous_merkle[104607]: 65534 65534
Jan 21 11:08:58 np0005590810 systemd[1]: libpod-0d5ab8072cda8cca7609698be50ec72d8db7e8dfa0673d9a1b2bc2fe74134879.scope: Deactivated successfully.
Jan 21 11:08:58 np0005590810 podman[104591]: 2026-01-21 16:08:58.374239925 +0000 UTC m=+0.119770174 container attach 0d5ab8072cda8cca7609698be50ec72d8db7e8dfa0673d9a1b2bc2fe74134879 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vigorous_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 podman[104591]: 2026-01-21 16:08:58.279713957 +0000 UTC m=+0.025244196 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 21 11:08:58 np0005590810 podman[104591]: 2026-01-21 16:08:58.374928246 +0000 UTC m=+0.120458495 container died 0d5ab8072cda8cca7609698be50ec72d8db7e8dfa0673d9a1b2bc2fe74134879 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vigorous_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:08:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:08:58.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:08:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:08:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 21 11:08:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 21 11:08:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:58 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:08:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:08:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:08:58.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:08:58 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 21 11:08:58 np0005590810 systemd[1]: var-lib-containers-storage-overlay-32475887fcac558dcbcb8df59bef37ad0b436209258dbdc5db5688eb8297543c-merged.mount: Deactivated successfully.
Jan 21 11:08:58 np0005590810 ceph-mon[74380]: Reconfiguring osd.0 (monmap changed)...
Jan 21 11:08:58 np0005590810 ceph-mon[74380]: Reconfiguring daemon osd.0 on compute-0
Jan 21 11:08:58 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:58 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:58 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 21 11:08:58 np0005590810 podman[104591]: 2026-01-21 16:08:58.582447286 +0000 UTC m=+0.327977505 container remove 0d5ab8072cda8cca7609698be50ec72d8db7e8dfa0673d9a1b2bc2fe74134879 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vigorous_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 podman[104591]: 2026-01-21 16:08:58.585655115 +0000 UTC m=+0.331185344 volume remove a48a470ab9948db6b3be5290b8835844635ce626849d8bdedb67a6a878024942
Jan 21 11:08:58 np0005590810 systemd[1]: libpod-conmon-0d5ab8072cda8cca7609698be50ec72d8db7e8dfa0673d9a1b2bc2fe74134879.scope: Deactivated successfully.
Jan 21 11:08:58 np0005590810 systemd[1]: Stopping Ceph alertmanager.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:08:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[97509]: ts=2026-01-21T16:08:58.809Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Jan 21 11:08:58 np0005590810 podman[104655]: 2026-01-21 16:08:58.819409141 +0000 UTC m=+0.045522236 container died 8b88c706f1c281ed839a461eb527042d837bac9b6eb951b300d6634e57c39e36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ddf90896e5c47b76a17693b905bfa012c197a8202311d88c1fc9b37583433f8b-merged.mount: Deactivated successfully.
Jan 21 11:08:58 np0005590810 podman[104655]: 2026-01-21 16:08:58.853389156 +0000 UTC m=+0.079502241 container remove 8b88c706f1c281ed839a461eb527042d837bac9b6eb951b300d6634e57c39e36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:58 np0005590810 podman[104655]: 2026-01-21 16:08:58.857210265 +0000 UTC m=+0.083323360 volume remove fc0bbe8d4d755110c76dfe8e47f4663ca949c121ab4ebe1a937ea76269d98e42
Jan 21 11:08:58 np0005590810 bash[104655]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0
Jan 21 11:08:58 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@alertmanager.compute-0.service: Deactivated successfully.
Jan 21 11:08:58 np0005590810 systemd[1]: Stopped Ceph alertmanager.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:08:58 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@alertmanager.compute-0.service: Consumed 1.060s CPU time.
Jan 21 11:08:59 np0005590810 systemd[1]: Starting Ceph alertmanager.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:08:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v27: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 0 objects/s recovering
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:08:59 np0005590810 podman[104760]: 2026-01-21 16:08:59.21838508 +0000 UTC m=+0.040135239 volume create ca11371d7839313de4712bbd08ddc94b696143f80ca478ee0ce27449ba65da62
Jan 21 11:08:59 np0005590810 podman[104760]: 2026-01-21 16:08:59.229509396 +0000 UTC m=+0.051259555 container create 50c8655205428d9eb4ff0638b184dbb97bde97ceb1b8d6fa1486afcf9c09cef3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:59 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1398e520c6d5d5187e863799a8f72eda27cdb58ca0471239a0a39d5bdf1b2c/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:59 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1398e520c6d5d5187e863799a8f72eda27cdb58ca0471239a0a39d5bdf1b2c/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 21 11:08:59 np0005590810 podman[104760]: 2026-01-21 16:08:59.29431494 +0000 UTC m=+0.116065139 container init 50c8655205428d9eb4ff0638b184dbb97bde97ceb1b8d6fa1486afcf9c09cef3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:59 np0005590810 podman[104760]: 2026-01-21 16:08:59.204705325 +0000 UTC m=+0.026455494 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 21 11:08:59 np0005590810 podman[104760]: 2026-01-21 16:08:59.299716528 +0000 UTC m=+0.121466697 container start 50c8655205428d9eb4ff0638b184dbb97bde97ceb1b8d6fa1486afcf9c09cef3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:08:59 np0005590810 bash[104760]: 50c8655205428d9eb4ff0638b184dbb97bde97ceb1b8d6fa1486afcf9c09cef3
Jan 21 11:08:59 np0005590810 systemd[1]: Started Ceph alertmanager.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:08:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:08:59.325Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Jan 21 11:08:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:08:59.325Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Jan 21 11:08:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:08:59.336Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Jan 21 11:08:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:08:59.337Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:08:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:08:59.374Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 21 11:08:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:08:59.375Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 21 11:08:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:08:59.379Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Jan 21 11:08:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:08:59.379Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:59 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 21 11:08:59 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 21 11:08:59 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Jan 21 11:08:59 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 11:08:59 np0005590810 ceph-mon[74380]: Reconfiguring daemon grafana.compute-0 on compute-0
Jan 21 11:08:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:59 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e40002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:08:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:08:59 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:00 np0005590810 podman[104862]: 2026-01-21 16:09:00.101248749 +0000 UTC m=+0.051869224 container create 2f747f02fa3f587608c1542c3130053df3389c36a423d9cc2de5c47366194c0a (image=quay.io/ceph/grafana:10.4.0, name=laughing_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 systemd[1]: Started libpod-conmon-2f747f02fa3f587608c1542c3130053df3389c36a423d9cc2de5c47366194c0a.scope.
Jan 21 11:09:00 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:09:00 np0005590810 podman[104862]: 2026-01-21 16:09:00.075102796 +0000 UTC m=+0.025723291 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 21 11:09:00 np0005590810 podman[104862]: 2026-01-21 16:09:00.180365577 +0000 UTC m=+0.130986122 container init 2f747f02fa3f587608c1542c3130053df3389c36a423d9cc2de5c47366194c0a (image=quay.io/ceph/grafana:10.4.0, name=laughing_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 podman[104862]: 2026-01-21 16:09:00.189264764 +0000 UTC m=+0.139885269 container start 2f747f02fa3f587608c1542c3130053df3389c36a423d9cc2de5c47366194c0a (image=quay.io/ceph/grafana:10.4.0, name=laughing_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 laughing_rhodes[104878]: 472 0
Jan 21 11:09:00 np0005590810 podman[104862]: 2026-01-21 16:09:00.193097663 +0000 UTC m=+0.143718138 container attach 2f747f02fa3f587608c1542c3130053df3389c36a423d9cc2de5c47366194c0a (image=quay.io/ceph/grafana:10.4.0, name=laughing_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 systemd[1]: libpod-2f747f02fa3f587608c1542c3130053df3389c36a423d9cc2de5c47366194c0a.scope: Deactivated successfully.
Jan 21 11:09:00 np0005590810 podman[104862]: 2026-01-21 16:09:00.194167406 +0000 UTC m=+0.144787921 container died 2f747f02fa3f587608c1542c3130053df3389c36a423d9cc2de5c47366194c0a (image=quay.io/ceph/grafana:10.4.0, name=laughing_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 systemd[1]: var-lib-containers-storage-overlay-78cbea808cdfd1f01367a850ef2b25a6b69a4c131df422fedb1f9e2a00eaf51d-merged.mount: Deactivated successfully.
Jan 21 11:09:00 np0005590810 podman[104862]: 2026-01-21 16:09:00.233829549 +0000 UTC m=+0.184450024 container remove 2f747f02fa3f587608c1542c3130053df3389c36a423d9cc2de5c47366194c0a (image=quay.io/ceph/grafana:10.4.0, name=laughing_rhodes, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 systemd[1]: libpod-conmon-2f747f02fa3f587608c1542c3130053df3389c36a423d9cc2de5c47366194c0a.scope: Deactivated successfully.
Jan 21 11:09:00 np0005590810 podman[104895]: 2026-01-21 16:09:00.306201978 +0000 UTC m=+0.050297884 container create 3ff055001b1a8a32e44af90cbade02f15d8a952cf5206aed0e723ff80e55b0ff (image=quay.io/ceph/grafana:10.4.0, name=funny_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 systemd[1]: Started libpod-conmon-3ff055001b1a8a32e44af90cbade02f15d8a952cf5206aed0e723ff80e55b0ff.scope.
Jan 21 11:09:00 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:09:00 np0005590810 podman[104895]: 2026-01-21 16:09:00.283434131 +0000 UTC m=+0.027530067 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 21 11:09:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:00.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:00 np0005590810 podman[104895]: 2026-01-21 16:09:00.390698454 +0000 UTC m=+0.134794380 container init 3ff055001b1a8a32e44af90cbade02f15d8a952cf5206aed0e723ff80e55b0ff (image=quay.io/ceph/grafana:10.4.0, name=funny_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 podman[104895]: 2026-01-21 16:09:00.397019311 +0000 UTC m=+0.141115217 container start 3ff055001b1a8a32e44af90cbade02f15d8a952cf5206aed0e723ff80e55b0ff (image=quay.io/ceph/grafana:10.4.0, name=funny_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 podman[104895]: 2026-01-21 16:09:00.400367785 +0000 UTC m=+0.144463721 container attach 3ff055001b1a8a32e44af90cbade02f15d8a952cf5206aed0e723ff80e55b0ff (image=quay.io/ceph/grafana:10.4.0, name=funny_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 funny_heyrovsky[104911]: 472 0
Jan 21 11:09:00 np0005590810 systemd[1]: libpod-3ff055001b1a8a32e44af90cbade02f15d8a952cf5206aed0e723ff80e55b0ff.scope: Deactivated successfully.
Jan 21 11:09:00 np0005590810 podman[104895]: 2026-01-21 16:09:00.401398297 +0000 UTC m=+0.145494203 container died 3ff055001b1a8a32e44af90cbade02f15d8a952cf5206aed0e723ff80e55b0ff (image=quay.io/ceph/grafana:10.4.0, name=funny_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 21 11:09:00 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ec962c9bbb32470383a966893b879cd9c82ac2664f9e05bcfd000b4865d13f98-merged.mount: Deactivated successfully.
Jan 21 11:09:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 21 11:09:00 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 21 11:09:00 np0005590810 podman[104895]: 2026-01-21 16:09:00.438652135 +0000 UTC m=+0.182748041 container remove 3ff055001b1a8a32e44af90cbade02f15d8a952cf5206aed0e723ff80e55b0ff (image=quay.io/ceph/grafana:10.4.0, name=funny_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:00 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:00 np0005590810 systemd[1]: libpod-conmon-3ff055001b1a8a32e44af90cbade02f15d8a952cf5206aed0e723ff80e55b0ff.scope: Deactivated successfully.
Jan 21 11:09:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:09:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:00.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:09:00 np0005590810 systemd[1]: Stopping Ceph grafana.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:09:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=server t=2026-01-21T16:09:00.720395801Z level=info msg="Shutdown started" reason="System signal: terminated"
Jan 21 11:09:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=tracing t=2026-01-21T16:09:00.720551776Z level=info msg="Closing tracing"
Jan 21 11:09:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=grafana-apiserver t=2026-01-21T16:09:00.72070787Z level=info msg="StorageObjectCountTracker pruner is exiting"
Jan 21 11:09:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=ticker t=2026-01-21T16:09:00.720797753Z level=info msg=stopped last_tick=2026-01-21T16:09:00Z
Jan 21 11:09:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[98089]: logger=sqlstore.transactions t=2026-01-21T16:09:00.732890109Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 21 11:09:00 np0005590810 podman[104961]: 2026-01-21 16:09:00.750857318 +0000 UTC m=+0.068725487 container died c7b256022c9d0ef0c6be3f0e958a6963d34737af722d182f28ce54bc60120280 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 systemd[1]: var-lib-containers-storage-overlay-be0e70969c431154804ce3bc79e6dfdd0ccb46bbd29f334538dfccad838075e1-merged.mount: Deactivated successfully.
Jan 21 11:09:00 np0005590810 podman[104961]: 2026-01-21 16:09:00.801407809 +0000 UTC m=+0.119275978 container remove c7b256022c9d0ef0c6be3f0e958a6963d34737af722d182f28ce54bc60120280 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:00 np0005590810 bash[104961]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0
Jan 21 11:09:00 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@grafana.compute-0.service: Deactivated successfully.
Jan 21 11:09:00 np0005590810 systemd[1]: Stopped Ceph grafana.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:09:00 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@grafana.compute-0.service: Consumed 4.242s CPU time.
Jan 21 11:09:00 np0005590810 systemd[1]: Starting Ceph grafana.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:09:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v30: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 21 11:09:01 np0005590810 podman[105066]: 2026-01-21 16:09:01.170725207 +0000 UTC m=+0.048614082 container create 915b915b353636f6072df56045c72e24aa0b97f86378396f7575eacf515dce1e (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:01 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf013805afdcbfc3e267f0fd25c1d8b666addc5e61808d8d46084ee36219cfa/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:01 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf013805afdcbfc3e267f0fd25c1d8b666addc5e61808d8d46084ee36219cfa/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:01 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf013805afdcbfc3e267f0fd25c1d8b666addc5e61808d8d46084ee36219cfa/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:01 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf013805afdcbfc3e267f0fd25c1d8b666addc5e61808d8d46084ee36219cfa/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:01 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf013805afdcbfc3e267f0fd25c1d8b666addc5e61808d8d46084ee36219cfa/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:01 np0005590810 podman[105066]: 2026-01-21 16:09:01.222837966 +0000 UTC m=+0.100726861 container init 915b915b353636f6072df56045c72e24aa0b97f86378396f7575eacf515dce1e (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:01 np0005590810 podman[105066]: 2026-01-21 16:09:01.230435182 +0000 UTC m=+0.108324057 container start 915b915b353636f6072df56045c72e24aa0b97f86378396f7575eacf515dce1e (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:01 np0005590810 bash[105066]: 915b915b353636f6072df56045c72e24aa0b97f86378396f7575eacf515dce1e
Jan 21 11:09:01 np0005590810 podman[105066]: 2026-01-21 16:09:01.14859808 +0000 UTC m=+0.026486985 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 21 11:09:01 np0005590810 systemd[1]: Started Ceph grafana.compute-0 for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:09:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:09:01.338Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000873415s
Jan 21 11:09:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.433730171Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-01-21T16:09:01Z
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434341869Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.43435908Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.43436667Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.43437089Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.43437461Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434378711Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434382541Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434387171Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434391481Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434395751Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434401841Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434405841Z level=info msg=Target target=[all]
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434417092Z level=info msg="Path Home" path=/usr/share/grafana
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434421042Z level=info msg="Path Data" path=/var/lib/grafana
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434424542Z level=info msg="Path Logs" path=/var/log/grafana
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434427792Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434431172Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=settings t=2026-01-21T16:09:01.434434732Z level=info msg="App mode production"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=sqlstore t=2026-01-21T16:09:01.434910767Z level=info msg="Connecting to DB" dbtype=sqlite3
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=sqlstore t=2026-01-21T16:09:01.434938298Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=migrator t=2026-01-21T16:09:01.435666671Z level=info msg="Starting DB migrations"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=migrator t=2026-01-21T16:09:01.452716651Z level=info msg="migrations completed" performed=0 skipped=547 duration=825.246µs
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=sqlstore t=2026-01-21T16:09:01.453710352Z level=info msg="Created default organization"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=secrets t=2026-01-21T16:09:01.454204457Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=plugin.store t=2026-01-21T16:09:01.474998063Z level=info msg="Loading plugins..."
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=local.finder t=2026-01-21T16:09:01.556976862Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=plugin.store t=2026-01-21T16:09:01.557139637Z level=info msg="Plugins loaded" count=55 duration=82.142914ms
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=query_data t=2026-01-21T16:09:01.560207672Z level=info msg="Query Service initialization"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=live.push_http t=2026-01-21T16:09:01.56339363Z level=info msg="Live Push Gateway initialization"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=ngalert.migration t=2026-01-21T16:09:01.566616721Z level=info msg=Starting
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=ngalert.state.manager t=2026-01-21T16:09:01.579528042Z level=info msg="Running in alternative execution of Error/NoData mode"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=infra.usagestats.collector t=2026-01-21T16:09:01.581516414Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=provisioning.datasources t=2026-01-21T16:09:01.583748343Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=provisioning.alerting t=2026-01-21T16:09:01.606728398Z level=info msg="starting to provision alerting"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=provisioning.alerting t=2026-01-21T16:09:01.606881662Z level=info msg="finished to provision alerting"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=ngalert.state.manager t=2026-01-21T16:09:01.607068698Z level=info msg="Warming state cache for startup"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=ngalert.state.manager t=2026-01-21T16:09:01.607386568Z level=info msg="State cache has been initialized" states=0 duration=317.35µs
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=grafanaStorageLogger t=2026-01-21T16:09:01.608719559Z level=info msg="Storage starting"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=ngalert.multiorg.alertmanager t=2026-01-21T16:09:01.608945896Z level=info msg="Starting MultiOrg Alertmanager"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=ngalert.scheduler t=2026-01-21T16:09:01.609102801Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=ticker t=2026-01-21T16:09:01.609234945Z level=info msg=starting first_tick=2026-01-21T16:09:10Z
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=http.server t=2026-01-21T16:09:01.610048891Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=http.server t=2026-01-21T16:09:01.610623638Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=provisioning.dashboard t=2026-01-21T16:09:01.654789371Z level=info msg="starting to provision dashboards"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=provisioning.dashboard t=2026-01-21T16:09:01.671099768Z level=info msg="finished to provision dashboards"
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=plugins.update.checker t=2026-01-21T16:09:01.677539749Z level=info msg="Update check succeeded" duration=68.659424ms
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=grafana.update.checker t=2026-01-21T16:09:01.679410896Z level=info msg="Update check succeeded" duration=71.823852ms
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:01 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:01 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e40002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=grafana-apiserver t=2026-01-21T16:09:02.188936552Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Jan 21 11:09:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=grafana-apiserver t=2026-01-21T16:09:02.189889312Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Jan 21 11:09:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:02.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:02 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:02.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v31: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 579 B/s rd, 0 op/s
Jan 21 11:09:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:09:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 21 11:09:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 21 11:09:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:09:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:03 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:03 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:04.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:04 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e40002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:04.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v33: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:09:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:05 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 21 11:09:05 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 21 11:09:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 21 11:09:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 11:09:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:09:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:09:05 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 21 11:09:05 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 21 11:09:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:09:05] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:09:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:09:05] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:09:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:05 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e40002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:05 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:05 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:06.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:06 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:09:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:06.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:06 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 21 11:09:06 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:09:06 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Jan 21 11:09:06 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:06 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 21 11:09:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v34: 353 pgs: 1 activating+remapped, 1 activating, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 398 B/s rd, 0 op/s; 5/224 objects misplaced (2.232%); 0 B/s, 0 objects/s recovering
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:09:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:07 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:07 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-1.oewgcf (monmap changed)...
Jan 21 11:09:07 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-1.oewgcf (monmap changed)...
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.oewgcf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.oewgcf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:09:07 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-1.oewgcf on compute-1
Jan 21 11:09:07 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-1.oewgcf on compute-1
Jan 21 11:09:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:07 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e40002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 21 11:09:07 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: Reconfiguring osd.1 (monmap changed)...
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: Reconfiguring daemon osd.1 on compute-1
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.oewgcf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 11:09:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:08.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:09:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:08 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:08.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:08 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 21 11:09:08 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:09:08 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 21 11:09:08 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 21 11:09:08 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 21 11:09:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v37: 353 pgs: 1 activating+remapped, 1 activating, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 5/224 objects misplaced (2.232%); 54 B/s, 2 objects/s recovering
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:09:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:09:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:09:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:09:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:09:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:09:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: Reconfiguring mgr.compute-1.oewgcf (monmap changed)...
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: Reconfiguring daemon mgr.compute-1.oewgcf on compute-1
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 11:09:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:09:09.341Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003347275s
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:09 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (unknown last config time)...
Jan 21 11:09:09 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (unknown last config time)...
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:09:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:09:09 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 21 11:09:09 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 21 11:09:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:09 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:09 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:10 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-2 (unknown last config time)...
Jan 21 11:09:10 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-2 (unknown last config time)...
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:09:10 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-2 on compute-2
Jan 21 11:09:10 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-2 on compute-2
Jan 21 11:09:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:10.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: Reconfiguring mon.compute-2 (unknown last config time)...
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:10 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 11:09:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:10 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e40002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:10.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v38: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 391 B/s rd, 0 op/s; 56 B/s, 2 objects/s recovering
Jan 21 11:09:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:11 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:11 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:12 np0005590810 ceph-mon[74380]: Reconfiguring crash.compute-2 (unknown last config time)...
Jan 21 11:09:12 np0005590810 ceph-mon[74380]: Reconfiguring daemon crash.compute-2 on compute-2
Jan 21 11:09:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:09:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:09:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:12.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:09:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:09:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:12 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:12.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:12 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.kdxyxe (monmap changed)...
Jan 21 11:09:12 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.kdxyxe (monmap changed)...
Jan 21 11:09:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdxyxe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 21 11:09:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdxyxe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 11:09:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 11:09:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 11:09:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:09:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:09:12 np0005590810 ceph-mgr[74671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.kdxyxe on compute-2
Jan 21 11:09:12 np0005590810 ceph-mgr[74671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.kdxyxe on compute-2
Jan 21 11:09:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v39: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 54 B/s, 2 objects/s recovering
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: Reconfiguring mgr.compute-2.kdxyxe (monmap changed)...
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdxyxe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: Reconfiguring daemon mgr.compute-2.kdxyxe on compute-2
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 21 11:09:13 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 21 11:09:13 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 21 11:09:13 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:13 np0005590810 ceph-mgr[74671]: [prometheus INFO root] Restarting engine...
Jan 21 11:09:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: [21/Jan/2026:16:09:13] ENGINE Bus STOPPING
Jan 21 11:09:13 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.error] [21/Jan/2026:16:09:13] ENGINE Bus STOPPING
Jan 21 11:09:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:09:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: [21/Jan/2026:16:09:13] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Jan 21 11:09:13 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.error] [21/Jan/2026:16:09:13] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Jan 21 11:09:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: [21/Jan/2026:16:09:13] ENGINE Bus STOPPED
Jan 21 11:09:13 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.error] [21/Jan/2026:16:09:13] ENGINE Bus STOPPED
Jan 21 11:09:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: [21/Jan/2026:16:09:13] ENGINE Bus STARTING
Jan 21 11:09:13 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.error] [21/Jan/2026:16:09:13] ENGINE Bus STARTING
Jan 21 11:09:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: [21/Jan/2026:16:09:13] ENGINE Serving on http://:::9283
Jan 21 11:09:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: [21/Jan/2026:16:09:13] ENGINE Bus STARTED
Jan 21 11:09:13 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.error] [21/Jan/2026:16:09:13] ENGINE Serving on http://:::9283
Jan 21 11:09:13 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.error] [21/Jan/2026:16:09:13] ENGINE Bus STARTED
Jan 21 11:09:13 np0005590810 ceph-mgr[74671]: [prometheus INFO root] Engine started.
Jan 21 11:09:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:13 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e40002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:13 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:14 np0005590810 podman[105251]: 2026-01-21 16:09:14.088714316 +0000 UTC m=+0.065340721 container exec 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:09:14 np0005590810 podman[105251]: 2026-01-21 16:09:14.210580014 +0000 UTC m=+0.187206399 container exec_died 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 21 11:09:14 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:14 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:14 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 21 11:09:14 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:14.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:14 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:14.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:14 np0005590810 podman[105367]: 2026-01-21 16:09:14.691912924 +0000 UTC m=+0.066940052 container exec 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:09:14 np0005590810 podman[105367]: 2026-01-21 16:09:14.701612855 +0000 UTC m=+0.076639983 container exec_died 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:09:15 np0005590810 podman[105483]: 2026-01-21 16:09:15.102008809 +0000 UTC m=+0.117835554 container exec 1851d1962129885d967a85b3c141d64f2256d7ce1d09e8b7f2c8a12b067da1c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:09:15 np0005590810 podman[105483]: 2026-01-21 16:09:15.117710317 +0000 UTC m=+0.133537052 container exec_died 1851d1962129885d967a85b3c141d64f2256d7ce1d09e8b7f2c8a12b067da1c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 11:09:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v40: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Jan 21 11:09:15 np0005590810 podman[105546]: 2026-01-21 16:09:15.366364365 +0000 UTC m=+0.064800255 container exec 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:09:15 np0005590810 podman[105546]: 2026-01-21 16:09:15.374596331 +0000 UTC m=+0.073032191 container exec_died 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:09:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:09:15] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 21 11:09:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:09:15] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 21 11:09:15 np0005590810 podman[105611]: 2026-01-21 16:09:15.611367689 +0000 UTC m=+0.056966231 container exec e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, version=2.2.4)
Jan 21 11:09:15 np0005590810 podman[105611]: 2026-01-21 16:09:15.619724159 +0000 UTC m=+0.065322681 container exec_died e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, description=keepalived for Ceph, vcs-type=git, com.redhat.component=keepalived-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., name=keepalived, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, architecture=x86_64)
Jan 21 11:09:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:15 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:15 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e40002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:15 np0005590810 podman[105677]: 2026-01-21 16:09:15.842334398 +0000 UTC m=+0.055550758 container exec 50c8655205428d9eb4ff0638b184dbb97bde97ceb1b8d6fa1486afcf9c09cef3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:09:15 np0005590810 podman[105677]: 2026-01-21 16:09:15.872877537 +0000 UTC m=+0.086093877 container exec_died 50c8655205428d9eb4ff0638b184dbb97bde97ceb1b8d6fa1486afcf9c09cef3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:09:16 np0005590810 podman[105751]: 2026-01-21 16:09:16.079589761 +0000 UTC m=+0.050392117 container exec 915b915b353636f6072df56045c72e24aa0b97f86378396f7575eacf515dce1e (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:16 np0005590810 podman[105751]: 2026-01-21 16:09:16.246286472 +0000 UTC m=+0.217088818 container exec_died 915b915b353636f6072df56045c72e24aa0b97f86378396f7575eacf515dce1e (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:09:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:16.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:16 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:16.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:16 np0005590810 podman[105864]: 2026-01-21 16:09:16.61978378 +0000 UTC m=+0.059658655 container exec 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:09:16 np0005590810 podman[105864]: 2026-01-21 16:09:16.659812144 +0000 UTC m=+0.099686989 container exec_died 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:09:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:09:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:09:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:09:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:09:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:09:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:09:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:09:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:09:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v41: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 331 B/s rd, 0 op/s; 11 B/s, 0 objects/s recovering
Jan 21 11:09:17 np0005590810 podman[105999]: 2026-01-21 16:09:17.571472517 +0000 UTC m=+0.044889316 container create b907f3449862abd4747ffbd6e00c53fa351dff59fcb81741564fbe1c70184816 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_pike, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 11:09:17 np0005590810 systemd[1]: Started libpod-conmon-b907f3449862abd4747ffbd6e00c53fa351dff59fcb81741564fbe1c70184816.scope.
Jan 21 11:09:17 np0005590810 podman[105999]: 2026-01-21 16:09:17.5525724 +0000 UTC m=+0.025989229 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:09:17 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:09:17 np0005590810 podman[105999]: 2026-01-21 16:09:17.66679849 +0000 UTC m=+0.140215329 container init b907f3449862abd4747ffbd6e00c53fa351dff59fcb81741564fbe1c70184816 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:09:17 np0005590810 podman[105999]: 2026-01-21 16:09:17.675191281 +0000 UTC m=+0.148608080 container start b907f3449862abd4747ffbd6e00c53fa351dff59fcb81741564fbe1c70184816 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_pike, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 21 11:09:17 np0005590810 podman[105999]: 2026-01-21 16:09:17.678524025 +0000 UTC m=+0.151940844 container attach b907f3449862abd4747ffbd6e00c53fa351dff59fcb81741564fbe1c70184816 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_pike, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:09:17 np0005590810 eloquent_pike[106016]: 167 167
Jan 21 11:09:17 np0005590810 systemd[1]: libpod-b907f3449862abd4747ffbd6e00c53fa351dff59fcb81741564fbe1c70184816.scope: Deactivated successfully.
Jan 21 11:09:17 np0005590810 podman[105999]: 2026-01-21 16:09:17.680997241 +0000 UTC m=+0.154414070 container died b907f3449862abd4747ffbd6e00c53fa351dff59fcb81741564fbe1c70184816 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_pike, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:09:17 np0005590810 systemd[1]: var-lib-containers-storage-overlay-908290ea839488b506161ee2b658dcaa72995597eeb9c2e9a63f88cd4839d7dc-merged.mount: Deactivated successfully.
Jan 21 11:09:17 np0005590810 podman[105999]: 2026-01-21 16:09:17.724115382 +0000 UTC m=+0.197532181 container remove b907f3449862abd4747ffbd6e00c53fa351dff59fcb81741564fbe1c70184816 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_pike, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 11:09:17 np0005590810 systemd[1]: libpod-conmon-b907f3449862abd4747ffbd6e00c53fa351dff59fcb81741564fbe1c70184816.scope: Deactivated successfully.
Jan 21 11:09:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:17 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:17 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:09:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:17 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:17 np0005590810 podman[106043]: 2026-01-21 16:09:17.891986259 +0000 UTC m=+0.044485763 container create 45dd28ad5ac304da767b2b0a277010f57f0965784a00f22417e0fbf93ab7be7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_poincare, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 21 11:09:17 np0005590810 systemd[1]: Started libpod-conmon-45dd28ad5ac304da767b2b0a277010f57f0965784a00f22417e0fbf93ab7be7a.scope.
Jan 21 11:09:17 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:09:17 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613d4e3651969579d89ce052bfd2a9ef766927bb157a4755f5ee2b3b4213a883/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:17 np0005590810 podman[106043]: 2026-01-21 16:09:17.874740203 +0000 UTC m=+0.027239727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:09:17 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613d4e3651969579d89ce052bfd2a9ef766927bb157a4755f5ee2b3b4213a883/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:17 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613d4e3651969579d89ce052bfd2a9ef766927bb157a4755f5ee2b3b4213a883/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:17 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613d4e3651969579d89ce052bfd2a9ef766927bb157a4755f5ee2b3b4213a883/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:17 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613d4e3651969579d89ce052bfd2a9ef766927bb157a4755f5ee2b3b4213a883/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:17 np0005590810 podman[106043]: 2026-01-21 16:09:17.983132301 +0000 UTC m=+0.135631835 container init 45dd28ad5ac304da767b2b0a277010f57f0965784a00f22417e0fbf93ab7be7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_poincare, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:09:17 np0005590810 podman[106043]: 2026-01-21 16:09:17.990551533 +0000 UTC m=+0.143051047 container start 45dd28ad5ac304da767b2b0a277010f57f0965784a00f22417e0fbf93ab7be7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:09:17 np0005590810 podman[106043]: 2026-01-21 16:09:17.995144285 +0000 UTC m=+0.147643789 container attach 45dd28ad5ac304da767b2b0a277010f57f0965784a00f22417e0fbf93ab7be7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 11:09:18 np0005590810 practical_poincare[106060]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:09:18 np0005590810 practical_poincare[106060]: --> All data devices are unavailable
Jan 21 11:09:18 np0005590810 systemd[1]: libpod-45dd28ad5ac304da767b2b0a277010f57f0965784a00f22417e0fbf93ab7be7a.scope: Deactivated successfully.
Jan 21 11:09:18 np0005590810 conmon[106060]: conmon 45dd28ad5ac304da767b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45dd28ad5ac304da767b2b0a277010f57f0965784a00f22417e0fbf93ab7be7a.scope/container/memory.events
Jan 21 11:09:18 np0005590810 podman[106043]: 2026-01-21 16:09:18.362951836 +0000 UTC m=+0.515451340 container died 45dd28ad5ac304da767b2b0a277010f57f0965784a00f22417e0fbf93ab7be7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:09:18 np0005590810 systemd[1]: var-lib-containers-storage-overlay-613d4e3651969579d89ce052bfd2a9ef766927bb157a4755f5ee2b3b4213a883-merged.mount: Deactivated successfully.
Jan 21 11:09:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:18.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:09:18 np0005590810 podman[106043]: 2026-01-21 16:09:18.408284495 +0000 UTC m=+0.560783989 container remove 45dd28ad5ac304da767b2b0a277010f57f0965784a00f22417e0fbf93ab7be7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_poincare, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:09:18 np0005590810 systemd[1]: libpod-conmon-45dd28ad5ac304da767b2b0a277010f57f0965784a00f22417e0fbf93ab7be7a.scope: Deactivated successfully.
Jan 21 11:09:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:18 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:18.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:18 np0005590810 podman[106181]: 2026-01-21 16:09:18.995296119 +0000 UTC m=+0.039806668 container create 5198d683f5623898ec0fcc549f58c1a0d43f326474b746e3cc5c2204b5087f9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_jang, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Jan 21 11:09:19 np0005590810 systemd[1]: Started libpod-conmon-5198d683f5623898ec0fcc549f58c1a0d43f326474b746e3cc5c2204b5087f9c.scope.
Jan 21 11:09:19 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:09:19 np0005590810 podman[106181]: 2026-01-21 16:09:18.978680973 +0000 UTC m=+0.023191542 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:09:19 np0005590810 podman[106181]: 2026-01-21 16:09:19.092975575 +0000 UTC m=+0.137486144 container init 5198d683f5623898ec0fcc549f58c1a0d43f326474b746e3cc5c2204b5087f9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Jan 21 11:09:19 np0005590810 podman[106181]: 2026-01-21 16:09:19.103750869 +0000 UTC m=+0.148261418 container start 5198d683f5623898ec0fcc549f58c1a0d43f326474b746e3cc5c2204b5087f9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:09:19 np0005590810 podman[106181]: 2026-01-21 16:09:19.107457355 +0000 UTC m=+0.151967904 container attach 5198d683f5623898ec0fcc549f58c1a0d43f326474b746e3cc5c2204b5087f9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 21 11:09:19 np0005590810 determined_jang[106197]: 167 167
Jan 21 11:09:19 np0005590810 systemd[1]: libpod-5198d683f5623898ec0fcc549f58c1a0d43f326474b746e3cc5c2204b5087f9c.scope: Deactivated successfully.
Jan 21 11:09:19 np0005590810 podman[106181]: 2026-01-21 16:09:19.112909894 +0000 UTC m=+0.157420463 container died 5198d683f5623898ec0fcc549f58c1a0d43f326474b746e3cc5c2204b5087f9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 21 11:09:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v42: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 299 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Jan 21 11:09:19 np0005590810 systemd[1]: var-lib-containers-storage-overlay-b75a86546065ab4fd492edea0df37ac69ec436bbd0ac277e39098b7368f24cf8-merged.mount: Deactivated successfully.
Jan 21 11:09:19 np0005590810 podman[106181]: 2026-01-21 16:09:19.18970195 +0000 UTC m=+0.234212499 container remove 5198d683f5623898ec0fcc549f58c1a0d43f326474b746e3cc5c2204b5087f9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_jang, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 11:09:19 np0005590810 systemd[1]: libpod-conmon-5198d683f5623898ec0fcc549f58c1a0d43f326474b746e3cc5c2204b5087f9c.scope: Deactivated successfully.
Jan 21 11:09:19 np0005590810 podman[106222]: 2026-01-21 16:09:19.373321338 +0000 UTC m=+0.063203556 container create 774fb0b4d4d78d525926ff906a814719ef11a3641f49cff97f5074c5d4b835c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:09:19 np0005590810 systemd[1]: Started libpod-conmon-774fb0b4d4d78d525926ff906a814719ef11a3641f49cff97f5074c5d4b835c5.scope.
Jan 21 11:09:19 np0005590810 podman[106222]: 2026-01-21 16:09:19.349418424 +0000 UTC m=+0.039300692 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:09:19 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:09:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe541a419fda26d30db3b66a431e55475d639462c7846ef3cb5231936a80854/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe541a419fda26d30db3b66a431e55475d639462c7846ef3cb5231936a80854/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe541a419fda26d30db3b66a431e55475d639462c7846ef3cb5231936a80854/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe541a419fda26d30db3b66a431e55475d639462c7846ef3cb5231936a80854/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:19 np0005590810 podman[106222]: 2026-01-21 16:09:19.467323559 +0000 UTC m=+0.157205777 container init 774fb0b4d4d78d525926ff906a814719ef11a3641f49cff97f5074c5d4b835c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_albattani, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:09:19 np0005590810 podman[106222]: 2026-01-21 16:09:19.474648797 +0000 UTC m=+0.164531035 container start 774fb0b4d4d78d525926ff906a814719ef11a3641f49cff97f5074c5d4b835c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_albattani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:09:19 np0005590810 podman[106222]: 2026-01-21 16:09:19.478353581 +0000 UTC m=+0.168235819 container attach 774fb0b4d4d78d525926ff906a814719ef11a3641f49cff97f5074c5d4b835c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 21 11:09:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:19 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]: {
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:    "0": [
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:        {
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            "devices": [
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "/dev/loop3"
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            ],
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            "lv_name": "ceph_lv0",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            "lv_size": "21470642176",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            "name": "ceph_lv0",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            "tags": {
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.cluster_name": "ceph",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.crush_device_class": "",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.encrypted": "0",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.osd_id": "0",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.type": "block",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.vdo": "0",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:                "ceph.with_tpm": "0"
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            },
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            "type": "block",
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:            "vg_name": "ceph_vg0"
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:        }
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]:    ]
Jan 21 11:09:19 np0005590810 reverent_albattani[106239]: }
Jan 21 11:09:19 np0005590810 systemd[1]: libpod-774fb0b4d4d78d525926ff906a814719ef11a3641f49cff97f5074c5d4b835c5.scope: Deactivated successfully.
Jan 21 11:09:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:19 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:19 np0005590810 podman[106248]: 2026-01-21 16:09:19.860396375 +0000 UTC m=+0.025002378 container died 774fb0b4d4d78d525926ff906a814719ef11a3641f49cff97f5074c5d4b835c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_albattani, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:09:19 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6fe541a419fda26d30db3b66a431e55475d639462c7846ef3cb5231936a80854-merged.mount: Deactivated successfully.
Jan 21 11:09:19 np0005590810 podman[106248]: 2026-01-21 16:09:19.912436612 +0000 UTC m=+0.077042575 container remove 774fb0b4d4d78d525926ff906a814719ef11a3641f49cff97f5074c5d4b835c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_albattani, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 21 11:09:19 np0005590810 systemd[1]: libpod-conmon-774fb0b4d4d78d525926ff906a814719ef11a3641f49cff97f5074c5d4b835c5.scope: Deactivated successfully.
Jan 21 11:09:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:20.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:20 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:20.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:20 np0005590810 podman[106354]: 2026-01-21 16:09:20.643045309 +0000 UTC m=+0.051670616 container create bd7bd423bf17f162e755ce895f1d62ffaf87ecfbad4d93e781083561e20fcd51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Jan 21 11:09:20 np0005590810 systemd[1]: Started libpod-conmon-bd7bd423bf17f162e755ce895f1d62ffaf87ecfbad4d93e781083561e20fcd51.scope.
Jan 21 11:09:20 np0005590810 podman[106354]: 2026-01-21 16:09:20.620927332 +0000 UTC m=+0.029552699 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:09:20 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:09:20 np0005590810 podman[106354]: 2026-01-21 16:09:20.733370636 +0000 UTC m=+0.141995953 container init bd7bd423bf17f162e755ce895f1d62ffaf87ecfbad4d93e781083561e20fcd51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 21 11:09:20 np0005590810 podman[106354]: 2026-01-21 16:09:20.740692204 +0000 UTC m=+0.149317521 container start bd7bd423bf17f162e755ce895f1d62ffaf87ecfbad4d93e781083561e20fcd51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_lalande, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 11:09:20 np0005590810 podman[106354]: 2026-01-21 16:09:20.745281137 +0000 UTC m=+0.153906454 container attach bd7bd423bf17f162e755ce895f1d62ffaf87ecfbad4d93e781083561e20fcd51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:09:20 np0005590810 brave_lalande[106372]: 167 167
Jan 21 11:09:20 np0005590810 systemd[1]: libpod-bd7bd423bf17f162e755ce895f1d62ffaf87ecfbad4d93e781083561e20fcd51.scope: Deactivated successfully.
Jan 21 11:09:20 np0005590810 podman[106354]: 2026-01-21 16:09:20.74795934 +0000 UTC m=+0.156584687 container died bd7bd423bf17f162e755ce895f1d62ffaf87ecfbad4d93e781083561e20fcd51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Jan 21 11:09:20 np0005590810 systemd[1]: var-lib-containers-storage-overlay-965b9ea72e211514984b4b4f7f2dd6db0f143e5ee3d07e84688c78b00160721a-merged.mount: Deactivated successfully.
Jan 21 11:09:20 np0005590810 podman[106354]: 2026-01-21 16:09:20.794519927 +0000 UTC m=+0.203145234 container remove bd7bd423bf17f162e755ce895f1d62ffaf87ecfbad4d93e781083561e20fcd51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:09:20 np0005590810 systemd[1]: libpod-conmon-bd7bd423bf17f162e755ce895f1d62ffaf87ecfbad4d93e781083561e20fcd51.scope: Deactivated successfully.
Jan 21 11:09:21 np0005590810 podman[106397]: 2026-01-21 16:09:21.007666301 +0000 UTC m=+0.053661309 container create 5e233aa30027110a3489a97bc2ad889c31161e6bc890e3656b07ec409e31dcbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_volhard, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 11:09:21 np0005590810 systemd[1]: Started libpod-conmon-5e233aa30027110a3489a97bc2ad889c31161e6bc890e3656b07ec409e31dcbd.scope.
Jan 21 11:09:21 np0005590810 podman[106397]: 2026-01-21 16:09:20.984330486 +0000 UTC m=+0.030325514 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:09:21 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:09:21 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a342055a038ff7c22a5d80b06b8eb250146f701d0a10a0e3d13c3c54ee4fb6fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:21 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a342055a038ff7c22a5d80b06b8eb250146f701d0a10a0e3d13c3c54ee4fb6fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:21 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a342055a038ff7c22a5d80b06b8eb250146f701d0a10a0e3d13c3c54ee4fb6fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:21 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a342055a038ff7c22a5d80b06b8eb250146f701d0a10a0e3d13c3c54ee4fb6fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:21 np0005590810 podman[106397]: 2026-01-21 16:09:21.099377041 +0000 UTC m=+0.145372039 container init 5e233aa30027110a3489a97bc2ad889c31161e6bc890e3656b07ec409e31dcbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 21 11:09:21 np0005590810 podman[106397]: 2026-01-21 16:09:21.106697379 +0000 UTC m=+0.152692367 container start 5e233aa30027110a3489a97bc2ad889c31161e6bc890e3656b07ec409e31dcbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 11:09:21 np0005590810 podman[106397]: 2026-01-21 16:09:21.110119305 +0000 UTC m=+0.156114303 container attach 5e233aa30027110a3489a97bc2ad889c31161e6bc890e3656b07ec409e31dcbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 21 11:09:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v43: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Jan 21 11:09:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:21 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:21 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:21 np0005590810 lvm[106490]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:09:21 np0005590810 lvm[106490]: VG ceph_vg0 finished
Jan 21 11:09:21 np0005590810 exciting_volhard[106415]: {}
Jan 21 11:09:21 np0005590810 systemd[1]: libpod-5e233aa30027110a3489a97bc2ad889c31161e6bc890e3656b07ec409e31dcbd.scope: Deactivated successfully.
Jan 21 11:09:21 np0005590810 systemd[1]: libpod-5e233aa30027110a3489a97bc2ad889c31161e6bc890e3656b07ec409e31dcbd.scope: Consumed 1.324s CPU time.
Jan 21 11:09:21 np0005590810 podman[106397]: 2026-01-21 16:09:21.957044777 +0000 UTC m=+1.003039795 container died 5e233aa30027110a3489a97bc2ad889c31161e6bc890e3656b07ec409e31dcbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Jan 21 11:09:21 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a342055a038ff7c22a5d80b06b8eb250146f701d0a10a0e3d13c3c54ee4fb6fd-merged.mount: Deactivated successfully.
Jan 21 11:09:22 np0005590810 podman[106397]: 2026-01-21 16:09:22.003704917 +0000 UTC m=+1.049699915 container remove 5e233aa30027110a3489a97bc2ad889c31161e6bc890e3656b07ec409e31dcbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_volhard, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 11:09:22 np0005590810 systemd[1]: libpod-conmon-5e233aa30027110a3489a97bc2ad889c31161e6bc890e3656b07ec409e31dcbd.scope: Deactivated successfully.
Jan 21 11:09:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:09:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:09:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:22 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:22 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:09:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:22.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:22 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:22.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:09:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:09:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:23 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:23 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:09:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:09:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:24.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:24 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:24.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v45: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:09:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:09:25] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Jan 21 11:09:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:09:25] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Jan 21 11:09:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:25 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:25 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:26.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:26 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:26.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:09:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:27 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:27 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:09:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:28.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:28 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:28.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:09:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:29 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:29 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:30.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:30 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:30.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v48: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:09:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:31 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:31 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:32.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:32 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:32.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:09:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:09:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:33 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:33 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:34.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:34 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e58002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:34.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:09:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:09:35] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Jan 21 11:09:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:09:35] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Jan 21 11:09:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:35 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:35 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:36.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:36 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:36.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v51: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:09:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:37 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:37 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e2c004660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:09:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:38.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:38 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e4c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:38.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:09:39
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'backups', '.rgw.root', 'cephfs.cephfs.data', '.mgr', '.nfs', 'volumes']
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:09:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:09:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:09:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:09:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:09:39 np0005590810 kernel: ganesha.nfsd[102404]: segfault at 50 ip 00007f4edb94032e sp 00007f4e3fffe210 error 4 in libntirpc.so.5.8[7f4edb925000+2c000] likely on CPU 5 (core 0, socket 5)
Jan 21 11:09:39 np0005590810 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 21 11:09:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[100404]: 21/01/2026 16:09:39 : epoch 6970fa01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4e34004050 fd 38 proxy ignored for local
Jan 21 11:09:39 np0005590810 systemd[1]: Started Process Core Dump (PID 106621/UID 0).
Jan 21 11:09:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:40.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:40.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v53: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:09:42 np0005590810 systemd-coredump[106622]: Process 100425 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 53:#012#0  0x00007f4edb94032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012#1  0x0000000000000000 n/a (n/a + 0x0)#012#2  0x00007f4edb94a900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)#012ELF object binary architecture: AMD x86-64
Jan 21 11:09:42 np0005590810 systemd[1]: systemd-coredump@2-106621-0.service: Deactivated successfully.
Jan 21 11:09:42 np0005590810 systemd[1]: systemd-coredump@2-106621-0.service: Consumed 2.191s CPU time.
Jan 21 11:09:42 np0005590810 podman[106629]: 2026-01-21 16:09:42.203111872 +0000 UTC m=+0.025655616 container died 1851d1962129885d967a85b3c141d64f2256d7ce1d09e8b7f2c8a12b067da1c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 21 11:09:42 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8dc0fffd69588edb52b21ab00fd2434294dd3ff0b497f772bb7dbfb44bf33e37-merged.mount: Deactivated successfully.
Jan 21 11:09:42 np0005590810 podman[106629]: 2026-01-21 16:09:42.254627108 +0000 UTC m=+0.077170842 container remove 1851d1962129885d967a85b3c141d64f2256d7ce1d09e8b7f2c8a12b067da1c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:09:42 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Main process exited, code=exited, status=139/n/a
Jan 21 11:09:42 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Failed with result 'exit-code'.
Jan 21 11:09:42 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 2.440s CPU time.
Jan 21 11:09:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:42.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:42.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v54: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:09:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:09:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:44.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:44.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:09:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:09:45] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 21 11:09:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:09:45] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 21 11:09:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:46.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:46.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=infra.usagestats t=2026-01-21T16:09:46.617436947Z level=info msg="Usage stats are ready to report"
Jan 21 11:09:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:09:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/160947 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:09:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:09:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:48.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:48.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v57: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:09:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:50.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:50.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:09:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:52.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:52.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:52 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Scheduled restart job, restart counter is at 3.
Jan 21 11:09:52 np0005590810 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:09:52 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 2.440s CPU time.
Jan 21 11:09:52 np0005590810 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:09:52 np0005590810 podman[106757]: 2026-01-21 16:09:52.842676695 +0000 UTC m=+0.046697244 container create 183fce5b37958e09aaaa8f5501c79b2219f76131ce3829517233daa26012bbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 11:09:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b5886d2179bb00101b03dca385047336456e8799f0b4a1c29ad3d81ba988f0/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b5886d2179bb00101b03dca385047336456e8799f0b4a1c29ad3d81ba988f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b5886d2179bb00101b03dca385047336456e8799f0b4a1c29ad3d81ba988f0/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b5886d2179bb00101b03dca385047336456e8799f0b4a1c29ad3d81ba988f0/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbatwb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:09:52 np0005590810 podman[106757]: 2026-01-21 16:09:52.823604259 +0000 UTC m=+0.027624828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:09:53 np0005590810 podman[106757]: 2026-01-21 16:09:53.082929612 +0000 UTC m=+0.286950251 container init 183fce5b37958e09aaaa8f5501c79b2219f76131ce3829517233daa26012bbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 21 11:09:53 np0005590810 podman[106757]: 2026-01-21 16:09:53.089571263 +0000 UTC m=+0.293591812 container start 183fce5b37958e09aaaa8f5501c79b2219f76131ce3829517233daa26012bbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 11:09:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:09:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 21 11:09:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:09:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 21 11:09:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:09:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 21 11:09:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:09:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 21 11:09:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:09:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 21 11:09:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:09:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 21 11:09:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:09:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:09:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 21 11:09:53 np0005590810 bash[106757]: 183fce5b37958e09aaaa8f5501c79b2219f76131ce3829517233daa26012bbb0
Jan 21 11:09:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:09:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:09:53 np0005590810 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:09:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:09:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:09:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:09:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:54.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:54.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:09:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/160955 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:09:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:09:55] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Jan 21 11:09:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:09:55] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Jan 21 11:09:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:56.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:56.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:56 np0005590810 ceph-mgr[74671]: [dashboard INFO request] [192.168.122.100:58958] [POST] [200] [0.001s] [4.0B] [e36a9cb8-d89c-46c1-9f27-c33d2c64dba7] /api/prometheus_receiver
Jan 21 11:09:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v61: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:09:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:09:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:09:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:09:58.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:09:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:09:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:09:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:09:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:09:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:09:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:09:59 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:09:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:09:59 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:09:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:09:59 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 21 11:10:00 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 21 11:10:00 np0005590810 ceph-mon[74380]: overall HEALTH_OK
Jan 21 11:10:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:00.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:00.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:10:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:02 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:10:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:02 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:10:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:02 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:10:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:10:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:02.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:10:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:02.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:10:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:10:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:04.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:04.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:10:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:10:05] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Jan 21 11:10:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:10:05] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Jan 21 11:10:05 np0005590810 python3.9[107004]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:10:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:06.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:10:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:06.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:10:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:10:06.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:10:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:10:07 np0005590810 python3.9[107293]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 21 11:10:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:10:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:08.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:10:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:10:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:10:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:08.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:10:08 np0005590810 python3.9[107459]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 21 11:10:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 21 11:10:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:10:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:10:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:10:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:10:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:10:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f895066c3a0>)]
Jan 21 11:10:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 11:10:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:10:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8950666100>)]
Jan 21 11:10:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 11:10:09 np0005590810 python3.9[107613]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:10:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:09 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dc0000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:09 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:10 np0005590810 python3.9[107765]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 21 11:10:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:10.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:10 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:10.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Jan 21 11:10:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:11 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da40016e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161011 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:10:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:11 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0000f90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:11 np0005590810 python3.9[107919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:10:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:12 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:10:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:12 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:10:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:10:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:12.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:10:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:12 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:12.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:12 np0005590810 python3.9[108071]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:10:13 np0005590810 python3.9[108150]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:10:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Jan 21 11:10:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:10:13 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.ygffhs(active, since 94s), standbys: compute-2.kdxyxe, compute-1.oewgcf
Jan 21 11:10:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:13 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d900016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:13 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0000f90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:14 np0005590810 python3.9[108303]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:10:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:14.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:14 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:14.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Jan 21 11:10:15 np0005590810 python3.9[108484]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 21 11:10:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:10:15] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Jan 21 11:10:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:10:15] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Jan 21 11:10:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:15 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:15 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d900016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:16 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:10:16 np0005590810 python3.9[108637]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 21 11:10:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:16.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:16 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0001f30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:16.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:10:16.953Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:10:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:10:16.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:10:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:10:17 np0005590810 python3.9[108792]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 11:10:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:17 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:17 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:10:18 np0005590810 python3.9[108944]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 21 11:10:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:18.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:18 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d900016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:18.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:10:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161019 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:10:19 np0005590810 python3.9[109098]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:10:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:19 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0001f30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:19 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:20.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:20 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:10:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:20.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:10:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v73: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.595834) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011821596484, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1578, "num_deletes": 251, "total_data_size": 3583473, "memory_usage": 3670392, "flush_reason": "Manual Compaction"}
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011821702652, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 3289937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9218, "largest_seqno": 10795, "table_properties": {"data_size": 3282313, "index_size": 4502, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16759, "raw_average_key_size": 20, "raw_value_size": 3266502, "raw_average_value_size": 4003, "num_data_blocks": 203, "num_entries": 816, "num_filter_entries": 816, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011729, "oldest_key_time": 1769011729, "file_creation_time": 1769011821, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 106849 microseconds, and 14659 cpu microseconds.
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.702705) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 3289937 bytes OK
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.702724) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.705450) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.705466) EVENT_LOG_v1 {"time_micros": 1769011821705462, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.705483) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 3576340, prev total WAL file size 3576340, number of live WAL files 2.
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.706437) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(3212KB)], [23(11MB)]
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011821706465, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 14956731, "oldest_snapshot_seqno": -1}
Jan 21 11:10:21 np0005590810 python3.9[109253]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:10:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:21 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4009 keys, 12566753 bytes, temperature: kUnknown
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011821867626, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 12566753, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12533975, "index_size": 21660, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 102297, "raw_average_key_size": 25, "raw_value_size": 12454645, "raw_average_value_size": 3106, "num_data_blocks": 930, "num_entries": 4009, "num_filter_entries": 4009, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769011821, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:10:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:21 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0001f30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.867857) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 12566753 bytes
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.882381) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.8 rd, 77.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 11.1 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(8.4) write-amplify(3.8) OK, records in: 4539, records dropped: 530 output_compression: NoCompression
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.882421) EVENT_LOG_v1 {"time_micros": 1769011821882407, "job": 8, "event": "compaction_finished", "compaction_time_micros": 161239, "compaction_time_cpu_micros": 25943, "output_level": 6, "num_output_files": 1, "total_output_size": 12566753, "num_input_records": 4539, "num_output_records": 4009, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011821882996, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011821885178, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.706359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.885288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.885295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.885297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.885300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:10:21 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:10:21.885302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:10:22 np0005590810 python3.9[109405]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:10:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:22.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:22 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:10:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:22.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:10:22 np0005590810 python3.9[109568]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:10:22 np0005590810 podman[109607]: 2026-01-21 16:10:22.985076347 +0000 UTC m=+0.065287574 container exec 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 21 11:10:23 np0005590810 podman[109607]: 2026-01-21 16:10:23.085491335 +0000 UTC m=+0.165702522 container exec_died 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:10:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Jan 21 11:10:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:10:23 np0005590810 podman[109845]: 2026-01-21 16:10:23.515793606 +0000 UTC m=+0.055320337 container exec 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:10:23 np0005590810 podman[109845]: 2026-01-21 16:10:23.52566809 +0000 UTC m=+0.065194801 container exec_died 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:10:23 np0005590810 python3.9[109909]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:10:23 np0005590810 podman[109973]: 2026-01-21 16:10:23.842715225 +0000 UTC m=+0.062252277 container exec 183fce5b37958e09aaaa8f5501c79b2219f76131ce3829517233daa26012bbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:10:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:23 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:23 np0005590810 podman[109973]: 2026-01-21 16:10:23.857631579 +0000 UTC m=+0.077168601 container exec_died 183fce5b37958e09aaaa8f5501c79b2219f76131ce3829517233daa26012bbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:10:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:23 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:24 np0005590810 podman[110113]: 2026-01-21 16:10:24.060003854 +0000 UTC m=+0.054256434 container exec 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:10:24 np0005590810 podman[110113]: 2026-01-21 16:10:24.067273324 +0000 UTC m=+0.061525874 container exec_died 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:10:24 np0005590810 python3.9[110098]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:10:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:10:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:10:24 np0005590810 podman[110180]: 2026-01-21 16:10:24.253807567 +0000 UTC m=+0.050238256 container exec e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, architecture=x86_64, distribution-scope=public, io.openshift.tags=Ceph keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, version=2.2.4, io.openshift.expose-services=, vcs-type=git)
Jan 21 11:10:24 np0005590810 podman[110200]: 2026-01-21 16:10:24.328419995 +0000 UTC m=+0.053852160 container exec_died e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, vendor=Red Hat, Inc., name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.openshift.expose-services=, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20)
Jan 21 11:10:24 np0005590810 podman[110180]: 2026-01-21 16:10:24.334783608 +0000 UTC m=+0.131214277 container exec_died e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, architecture=x86_64, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc.)
Jan 21 11:10:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:24.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:24 np0005590810 podman[110269]: 2026-01-21 16:10:24.518797959 +0000 UTC m=+0.050258306 container exec 50c8655205428d9eb4ff0638b184dbb97bde97ceb1b8d6fa1486afcf9c09cef3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:10:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:24 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0003330 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:24.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:24 np0005590810 podman[110269]: 2026-01-21 16:10:24.574734835 +0000 UTC m=+0.106195202 container exec_died 50c8655205428d9eb4ff0638b184dbb97bde97ceb1b8d6fa1486afcf9c09cef3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:10:24 np0005590810 podman[110370]: 2026-01-21 16:10:24.765697128 +0000 UTC m=+0.052806407 container exec 915b915b353636f6072df56045c72e24aa0b97f86378396f7575eacf515dce1e (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:10:24 np0005590810 podman[110370]: 2026-01-21 16:10:24.953058296 +0000 UTC m=+0.240167575 container exec_died 915b915b353636f6072df56045c72e24aa0b97f86378396f7575eacf515dce1e (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:10:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v75: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Jan 21 11:10:25 np0005590810 python3.9[110516]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:10:25 np0005590810 podman[110586]: 2026-01-21 16:10:25.329033392 +0000 UTC m=+0.077284994 container exec 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:10:25 np0005590810 podman[110586]: 2026-01-21 16:10:25.374263248 +0000 UTC m=+0.122514830 container exec_died 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:10:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:10:25 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:10:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:10:25 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:10:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:10:25] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 21 11:10:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:10:25] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 21 11:10:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:25 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:25 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:10:26 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:10:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:26.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:26 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:26 np0005590810 podman[110796]: 2026-01-21 16:10:26.570671062 +0000 UTC m=+0.033888637 container create 9246b5981e7c2aca7a2200127e21657cd85ff38c851acf7ed373e1f1f6fa82ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 11:10:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:26.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:26 np0005590810 systemd[1]: Started libpod-conmon-9246b5981e7c2aca7a2200127e21657cd85ff38c851acf7ed373e1f1f6fa82ef.scope.
Jan 21 11:10:26 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:10:26 np0005590810 podman[110796]: 2026-01-21 16:10:26.645822027 +0000 UTC m=+0.109039612 container init 9246b5981e7c2aca7a2200127e21657cd85ff38c851acf7ed373e1f1f6fa82ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:10:26 np0005590810 podman[110796]: 2026-01-21 16:10:26.556319196 +0000 UTC m=+0.019536791 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:10:26 np0005590810 podman[110796]: 2026-01-21 16:10:26.65219071 +0000 UTC m=+0.115408275 container start 9246b5981e7c2aca7a2200127e21657cd85ff38c851acf7ed373e1f1f6fa82ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:10:26 np0005590810 upbeat_noether[110813]: 167 167
Jan 21 11:10:26 np0005590810 systemd[1]: libpod-9246b5981e7c2aca7a2200127e21657cd85ff38c851acf7ed373e1f1f6fa82ef.scope: Deactivated successfully.
Jan 21 11:10:26 np0005590810 podman[110796]: 2026-01-21 16:10:26.657067864 +0000 UTC m=+0.120285459 container attach 9246b5981e7c2aca7a2200127e21657cd85ff38c851acf7ed373e1f1f6fa82ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:10:26 np0005590810 podman[110796]: 2026-01-21 16:10:26.658080127 +0000 UTC m=+0.121297692 container died 9246b5981e7c2aca7a2200127e21657cd85ff38c851acf7ed373e1f1f6fa82ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:10:26 np0005590810 systemd[1]: var-lib-containers-storage-overlay-caaa9c7a33bcb370a7e9a9d8fe59598e5c64cf077e8be4790bfe34f7fe103483-merged.mount: Deactivated successfully.
Jan 21 11:10:26 np0005590810 podman[110796]: 2026-01-21 16:10:26.71046436 +0000 UTC m=+0.173681935 container remove 9246b5981e7c2aca7a2200127e21657cd85ff38c851acf7ed373e1f1f6fa82ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:10:26 np0005590810 systemd[1]: libpod-conmon-9246b5981e7c2aca7a2200127e21657cd85ff38c851acf7ed373e1f1f6fa82ef.scope: Deactivated successfully.
Jan 21 11:10:26 np0005590810 podman[110864]: 2026-01-21 16:10:26.852337474 +0000 UTC m=+0.039708341 container create ac378987793c3e2d913bf77c646ff1b5676d2b95a5c05b7b7d291ec0f5b5b0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_jones, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Jan 21 11:10:26 np0005590810 systemd[1]: Started libpod-conmon-ac378987793c3e2d913bf77c646ff1b5676d2b95a5c05b7b7d291ec0f5b5b0f8.scope.
Jan 21 11:10:26 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:10:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013a937235839b66cc66a5f837ac4670c8289f8b0806e7f38ee76904ff7dd63a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013a937235839b66cc66a5f837ac4670c8289f8b0806e7f38ee76904ff7dd63a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013a937235839b66cc66a5f837ac4670c8289f8b0806e7f38ee76904ff7dd63a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013a937235839b66cc66a5f837ac4670c8289f8b0806e7f38ee76904ff7dd63a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013a937235839b66cc66a5f837ac4670c8289f8b0806e7f38ee76904ff7dd63a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:26 np0005590810 podman[110864]: 2026-01-21 16:10:26.836222152 +0000 UTC m=+0.023593029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:10:26 np0005590810 podman[110864]: 2026-01-21 16:10:26.936870528 +0000 UTC m=+0.124241415 container init ac378987793c3e2d913bf77c646ff1b5676d2b95a5c05b7b7d291ec0f5b5b0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 11:10:26 np0005590810 podman[110864]: 2026-01-21 16:10:26.943019033 +0000 UTC m=+0.130389880 container start ac378987793c3e2d913bf77c646ff1b5676d2b95a5c05b7b7d291ec0f5b5b0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:10:26 np0005590810 podman[110864]: 2026-01-21 16:10:26.946046849 +0000 UTC m=+0.133417716 container attach ac378987793c3e2d913bf77c646ff1b5676d2b95a5c05b7b7d291ec0f5b5b0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:10:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:10:26.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:10:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v76: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:10:27 np0005590810 ecstatic_jones[110880]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:10:27 np0005590810 ecstatic_jones[110880]: --> All data devices are unavailable
Jan 21 11:10:27 np0005590810 systemd[1]: libpod-ac378987793c3e2d913bf77c646ff1b5676d2b95a5c05b7b7d291ec0f5b5b0f8.scope: Deactivated successfully.
Jan 21 11:10:27 np0005590810 podman[110864]: 2026-01-21 16:10:27.30483605 +0000 UTC m=+0.492206897 container died ac378987793c3e2d913bf77c646ff1b5676d2b95a5c05b7b7d291ec0f5b5b0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_jones, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 21 11:10:27 np0005590810 systemd[1]: var-lib-containers-storage-overlay-013a937235839b66cc66a5f837ac4670c8289f8b0806e7f38ee76904ff7dd63a-merged.mount: Deactivated successfully.
Jan 21 11:10:27 np0005590810 podman[110864]: 2026-01-21 16:10:27.421914017 +0000 UTC m=+0.609284874 container remove ac378987793c3e2d913bf77c646ff1b5676d2b95a5c05b7b7d291ec0f5b5b0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:10:27 np0005590810 systemd[1]: libpod-conmon-ac378987793c3e2d913bf77c646ff1b5676d2b95a5c05b7b7d291ec0f5b5b0f8.scope: Deactivated successfully.
Jan 21 11:10:27 np0005590810 python3.9[111056]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:10:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:27 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0003330 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:27 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:28 np0005590810 podman[111178]: 2026-01-21 16:10:27.960087283 +0000 UTC m=+0.020441670 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:10:28 np0005590810 podman[111178]: 2026-01-21 16:10:28.206825957 +0000 UTC m=+0.267180314 container create d46a611c1a76dc8d98e877be3dbc98133198014ff1c080ed236b8434551c515c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_morse, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:10:28 np0005590810 systemd[1]: Started libpod-conmon-d46a611c1a76dc8d98e877be3dbc98133198014ff1c080ed236b8434551c515c.scope.
Jan 21 11:10:28 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:10:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:10:28 np0005590810 podman[111178]: 2026-01-21 16:10:28.436675983 +0000 UTC m=+0.497030360 container init d46a611c1a76dc8d98e877be3dbc98133198014ff1c080ed236b8434551c515c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 11:10:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:10:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:28.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:10:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:28 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:28 np0005590810 podman[111178]: 2026-01-21 16:10:28.560754942 +0000 UTC m=+0.621109289 container start d46a611c1a76dc8d98e877be3dbc98133198014ff1c080ed236b8434551c515c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_morse, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:10:28 np0005590810 serene_morse[111290]: 167 167
Jan 21 11:10:28 np0005590810 podman[111178]: 2026-01-21 16:10:28.566080831 +0000 UTC m=+0.626435188 container attach d46a611c1a76dc8d98e877be3dbc98133198014ff1c080ed236b8434551c515c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_morse, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:10:28 np0005590810 systemd[1]: libpod-d46a611c1a76dc8d98e877be3dbc98133198014ff1c080ed236b8434551c515c.scope: Deactivated successfully.
Jan 21 11:10:28 np0005590810 conmon[111290]: conmon d46a611c1a76dc8d98e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d46a611c1a76dc8d98e877be3dbc98133198014ff1c080ed236b8434551c515c.scope/container/memory.events
Jan 21 11:10:28 np0005590810 podman[111178]: 2026-01-21 16:10:28.568406235 +0000 UTC m=+0.628760592 container died d46a611c1a76dc8d98e877be3dbc98133198014ff1c080ed236b8434551c515c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 21 11:10:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:28 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6f94b43a1e9c4269cec392f6287b0699fd2d2580a8e57c7c67f23add3deff17d-merged.mount: Deactivated successfully.
Jan 21 11:10:28 np0005590810 python3.9[111287]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 21 11:10:28 np0005590810 podman[111178]: 2026-01-21 16:10:28.606731063 +0000 UTC m=+0.667085420 container remove d46a611c1a76dc8d98e877be3dbc98133198014ff1c080ed236b8434551c515c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 11:10:28 np0005590810 systemd[1]: libpod-conmon-d46a611c1a76dc8d98e877be3dbc98133198014ff1c080ed236b8434551c515c.scope: Deactivated successfully.
Jan 21 11:10:28 np0005590810 podman[111316]: 2026-01-21 16:10:28.759628066 +0000 UTC m=+0.041914531 container create 6ad45c5a85d7c8c90a56313710d4a5d74e15d45c02cb2760d64576b8a97f8856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:10:28 np0005590810 podman[111316]: 2026-01-21 16:10:28.740307593 +0000 UTC m=+0.022594078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:10:28 np0005590810 systemd[1]: Started libpod-conmon-6ad45c5a85d7c8c90a56313710d4a5d74e15d45c02cb2760d64576b8a97f8856.scope.
Jan 21 11:10:29 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:10:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6606b37afc1c224217ba9615827b6b06964c1f24bc893965936c1faf1642c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6606b37afc1c224217ba9615827b6b06964c1f24bc893965936c1faf1642c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6606b37afc1c224217ba9615827b6b06964c1f24bc893965936c1faf1642c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6606b37afc1c224217ba9615827b6b06964c1f24bc893965936c1faf1642c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:10:29 np0005590810 python3.9[111486]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:10:29 np0005590810 podman[111316]: 2026-01-21 16:10:29.518098176 +0000 UTC m=+0.800384661 container init 6ad45c5a85d7c8c90a56313710d4a5d74e15d45c02cb2760d64576b8a97f8856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 11:10:29 np0005590810 podman[111316]: 2026-01-21 16:10:29.528750344 +0000 UTC m=+0.811036799 container start 6ad45c5a85d7c8c90a56313710d4a5d74e15d45c02cb2760d64576b8a97f8856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:10:29 np0005590810 podman[111316]: 2026-01-21 16:10:29.595172893 +0000 UTC m=+0.877459358 container attach 6ad45c5a85d7c8c90a56313710d4a5d74e15d45c02cb2760d64576b8a97f8856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]: {
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:    "0": [
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:        {
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            "devices": [
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "/dev/loop3"
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            ],
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            "lv_name": "ceph_lv0",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            "lv_size": "21470642176",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            "name": "ceph_lv0",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            "tags": {
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.cluster_name": "ceph",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.crush_device_class": "",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.encrypted": "0",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.osd_id": "0",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.type": "block",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.vdo": "0",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:                "ceph.with_tpm": "0"
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            },
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            "type": "block",
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:            "vg_name": "ceph_vg0"
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:        }
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]:    ]
Jan 21 11:10:29 np0005590810 sharp_stonebraker[111380]: }
Jan 21 11:10:29 np0005590810 systemd[1]: libpod-6ad45c5a85d7c8c90a56313710d4a5d74e15d45c02cb2760d64576b8a97f8856.scope: Deactivated successfully.
Jan 21 11:10:29 np0005590810 podman[111316]: 2026-01-21 16:10:29.849058963 +0000 UTC m=+1.131345428 container died 6ad45c5a85d7c8c90a56313710d4a5d74e15d45c02cb2760d64576b8a97f8856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:10:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:29 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:29 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0003330 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:30 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ed6606b37afc1c224217ba9615827b6b06964c1f24bc893965936c1faf1642c4-merged.mount: Deactivated successfully.
Jan 21 11:10:30 np0005590810 podman[111316]: 2026-01-21 16:10:30.109911095 +0000 UTC m=+1.392197560 container remove 6ad45c5a85d7c8c90a56313710d4a5d74e15d45c02cb2760d64576b8a97f8856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 21 11:10:30 np0005590810 systemd[1]: libpod-conmon-6ad45c5a85d7c8c90a56313710d4a5d74e15d45c02cb2760d64576b8a97f8856.scope: Deactivated successfully.
Jan 21 11:10:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:30 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:10:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:30.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:10:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:30.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:30 np0005590810 podman[111676]: 2026-01-21 16:10:30.629026665 +0000 UTC m=+0.042975494 container create 3fb2a118cd1f6341386c1d43ab640954e566345e0b0c376d6cb6b6c26a4b0397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_kepler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:10:30 np0005590810 systemd[1]: Started libpod-conmon-3fb2a118cd1f6341386c1d43ab640954e566345e0b0c376d6cb6b6c26a4b0397.scope.
Jan 21 11:10:30 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:10:30 np0005590810 podman[111676]: 2026-01-21 16:10:30.61059111 +0000 UTC m=+0.024539969 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:10:30 np0005590810 podman[111676]: 2026-01-21 16:10:30.718805241 +0000 UTC m=+0.132754090 container init 3fb2a118cd1f6341386c1d43ab640954e566345e0b0c376d6cb6b6c26a4b0397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_kepler, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:10:30 np0005590810 podman[111676]: 2026-01-21 16:10:30.727493739 +0000 UTC m=+0.141442568 container start 3fb2a118cd1f6341386c1d43ab640954e566345e0b0c376d6cb6b6c26a4b0397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 21 11:10:30 np0005590810 silly_kepler[111712]: 167 167
Jan 21 11:10:30 np0005590810 podman[111676]: 2026-01-21 16:10:30.731587946 +0000 UTC m=+0.145536875 container attach 3fb2a118cd1f6341386c1d43ab640954e566345e0b0c376d6cb6b6c26a4b0397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 21 11:10:30 np0005590810 systemd[1]: libpod-3fb2a118cd1f6341386c1d43ab640954e566345e0b0c376d6cb6b6c26a4b0397.scope: Deactivated successfully.
Jan 21 11:10:30 np0005590810 podman[111676]: 2026-01-21 16:10:30.732718703 +0000 UTC m=+0.146667542 container died 3fb2a118cd1f6341386c1d43ab640954e566345e0b0c376d6cb6b6c26a4b0397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 11:10:30 np0005590810 systemd[1]: var-lib-containers-storage-overlay-9e2ef600c39cdb106909ff631b75367d5ef042381977a3cfafa53cbf205623be-merged.mount: Deactivated successfully.
Jan 21 11:10:30 np0005590810 podman[111676]: 2026-01-21 16:10:30.772015556 +0000 UTC m=+0.185964385 container remove 3fb2a118cd1f6341386c1d43ab640954e566345e0b0c376d6cb6b6c26a4b0397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:10:30 np0005590810 systemd[1]: libpod-conmon-3fb2a118cd1f6341386c1d43ab640954e566345e0b0c376d6cb6b6c26a4b0397.scope: Deactivated successfully.
Jan 21 11:10:30 np0005590810 podman[111738]: 2026-01-21 16:10:30.947433967 +0000 UTC m=+0.071469363 container create 34dba243c64991b9737bb87513459c1e1c319d7c345fb5fa8234d79a6ed71178 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heisenberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:10:30 np0005590810 podman[111738]: 2026-01-21 16:10:30.899092743 +0000 UTC m=+0.023128159 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:10:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v78: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:10:31 np0005590810 systemd[1]: Started libpod-conmon-34dba243c64991b9737bb87513459c1e1c319d7c345fb5fa8234d79a6ed71178.scope.
Jan 21 11:10:31 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:10:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64963552df4740e45e60112e78cf4afce249dddbce0fdd048a04a5aee967ce9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64963552df4740e45e60112e78cf4afce249dddbce0fdd048a04a5aee967ce9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64963552df4740e45e60112e78cf4afce249dddbce0fdd048a04a5aee967ce9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64963552df4740e45e60112e78cf4afce249dddbce0fdd048a04a5aee967ce9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:10:31 np0005590810 podman[111738]: 2026-01-21 16:10:31.247384168 +0000 UTC m=+0.371419584 container init 34dba243c64991b9737bb87513459c1e1c319d7c345fb5fa8234d79a6ed71178 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:10:31 np0005590810 podman[111738]: 2026-01-21 16:10:31.255183527 +0000 UTC m=+0.379218963 container start 34dba243c64991b9737bb87513459c1e1c319d7c345fb5fa8234d79a6ed71178 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 21 11:10:31 np0005590810 python3.9[111804]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:10:31 np0005590810 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 21 11:10:31 np0005590810 systemd[1]: tuned.service: Deactivated successfully.
Jan 21 11:10:31 np0005590810 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 21 11:10:31 np0005590810 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 21 11:10:31 np0005590810 podman[111738]: 2026-01-21 16:10:31.455601796 +0000 UTC m=+0.579637212 container attach 34dba243c64991b9737bb87513459c1e1c319d7c345fb5fa8234d79a6ed71178 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 11:10:31 np0005590810 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 21 11:10:31 np0005590810 lvm[111917]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:10:31 np0005590810 lvm[111917]: VG ceph_vg0 finished
Jan 21 11:10:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:31 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:31 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:31 np0005590810 eager_heisenberg[111808]: {}
Jan 21 11:10:31 np0005590810 systemd[1]: libpod-34dba243c64991b9737bb87513459c1e1c319d7c345fb5fa8234d79a6ed71178.scope: Deactivated successfully.
Jan 21 11:10:31 np0005590810 systemd[1]: libpod-34dba243c64991b9737bb87513459c1e1c319d7c345fb5fa8234d79a6ed71178.scope: Consumed 1.091s CPU time.
Jan 21 11:10:31 np0005590810 podman[111738]: 2026-01-21 16:10:31.946485133 +0000 UTC m=+1.070520529 container died 34dba243c64991b9737bb87513459c1e1c319d7c345fb5fa8234d79a6ed71178 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Jan 21 11:10:31 np0005590810 systemd[1]: var-lib-containers-storage-overlay-e64963552df4740e45e60112e78cf4afce249dddbce0fdd048a04a5aee967ce9-merged.mount: Deactivated successfully.
Jan 21 11:10:32 np0005590810 podman[111738]: 2026-01-21 16:10:32.001907632 +0000 UTC m=+1.125943028 container remove 34dba243c64991b9737bb87513459c1e1c319d7c345fb5fa8234d79a6ed71178 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:10:32 np0005590810 systemd[1]: libpod-conmon-34dba243c64991b9737bb87513459c1e1c319d7c345fb5fa8234d79a6ed71178.scope: Deactivated successfully.
Jan 21 11:10:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:10:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:10:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:10:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:10:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:32 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0003330 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 21 11:10:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:32.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 21 11:10:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:32.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:10:33 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:10:33 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:10:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:10:33 np0005590810 python3.9[112083]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 21 11:10:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:33 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:33 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:34 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 21 11:10:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:34.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 21 11:10:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:34.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:10:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:10:35] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 21 11:10:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:10:35] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 21 11:10:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:35 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:35 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:36 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:36.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:10:36.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:10:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:10:36.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:10:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v81: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:10:37 np0005590810 python3.9[112263]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:10:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:37 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:37 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:38 np0005590810 python3.9[112418]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:10:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:10:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:38 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19171c25d0 =====
Jan 21 11:10:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:38.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19171c25d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:10:38 np0005590810 radosgw[94128]: beast: 0x7f19171c25d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:38.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:10:38 np0005590810 systemd[1]: session-38.scope: Deactivated successfully.
Jan 21 11:10:38 np0005590810 systemd[1]: session-38.scope: Consumed 1min 8.791s CPU time.
Jan 21 11:10:38 np0005590810 systemd-logind[795]: Session 38 logged out. Waiting for processes to exit.
Jan 21 11:10:38 np0005590810 systemd-logind[795]: Removed session 38.
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:10:39
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', '.nfs', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', '.mgr', 'images', 'vms', 'cephfs.cephfs.data', 'volumes']
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v82: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:10:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:10:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:10:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:10:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:39 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:39 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:40 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c000e00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:40.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.002000067s ======
Jan 21 11:10:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:40.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000067s
Jan 21 11:10:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:10:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:41 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:41 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:42 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:42.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:10:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:42.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:10:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v84: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:10:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:10:43 np0005590810 systemd-logind[795]: New session 40 of user zuul.
Jan 21 11:10:43 np0005590810 systemd[1]: Started Session 40 of User zuul.
Jan 21 11:10:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:43 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c001940 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:43 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:44 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:10:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:44.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:10:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:44.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:44 np0005590810 python3.9[112606]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:10:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:10:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:10:45] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Jan 21 11:10:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:10:45] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Jan 21 11:10:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:45 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:45 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c001940 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:46 np0005590810 python3.9[112764]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 21 11:10:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:46 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:46.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:10:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:46.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:10:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:10:46.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:10:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:10:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161047 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:10:47 np0005590810 python3.9[112918]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 11:10:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:47 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:47 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:48 np0005590810 python3.9[113003]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 11:10:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:10:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:48 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c001940 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:48.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 21 11:10:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:48.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 21 11:10:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:10:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:49 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:49 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:50 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 21 11:10:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:50.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 21 11:10:50 np0005590810 python3.9[113158]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:10:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 21 11:10:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:50.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 21 11:10:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:10:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:51 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:51 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:52 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:52.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:52.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:53 np0005590810 python3.9[113314]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 11:10:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:10:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:10:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c002db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:10:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:10:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:54 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:54.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:54.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:54 np0005590810 python3.9[113469]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:10:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:10:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:10:55] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 21 11:10:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:10:55] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 21 11:10:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:55 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:55 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:56 np0005590810 python3.9[113647]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 21 11:10:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:56 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:56.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 21 11:10:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:56.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 21 11:10:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:10:56.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:10:57 np0005590810 python3.9[113798]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:10:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:57 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:10:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:10:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:57 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:57 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:58 np0005590810 python3.9[113957]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:10:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:10:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:58 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:10:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:10:58.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:10:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:10:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:10:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:10:58.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:10:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:10:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:59 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:10:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:10:59 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:00 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:11:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:00 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:11:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:00 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:11:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:00 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:00.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 21 11:11:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:00.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 21 11:11:00 np0005590810 python3.9[114112]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:11:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v93: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Jan 21 11:11:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:01 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:01 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:02 np0005590810 python3.9[114401]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 21 11:11:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:02 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:02.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:02.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Jan 21 11:11:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:03 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:11:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:11:03 np0005590810 python3.9[114553]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:11:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:03 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:03 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:04 np0005590810 python3.9[114707]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:11:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:04 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:11:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:04.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:11:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:04.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:11:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:11:05] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 21 11:11:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:11:05] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 21 11:11:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:05 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:05 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac0021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:06 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:06.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:06.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:11:06.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:11:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v96: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:11:07 np0005590810 python3.9[114863]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:11:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:07 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:07 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:11:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:08.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:08.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:11:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:11:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:11:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:11:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:11:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161109 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:11:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:11:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:11:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:11:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:11:09 np0005590810 python3.9[115019]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:11:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:09 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:09 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:10 np0005590810 python3.9[115174]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 21 11:11:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:10 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00022d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 21 11:11:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:10.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 21 11:11:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:10.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:11:11 np0005590810 systemd[1]: session-40.scope: Deactivated successfully.
Jan 21 11:11:11 np0005590810 systemd[1]: session-40.scope: Consumed 18.474s CPU time.
Jan 21 11:11:11 np0005590810 systemd-logind[795]: Session 40 logged out. Waiting for processes to exit.
Jan 21 11:11:11 np0005590810 systemd-logind[795]: Removed session 40.
Jan 21 11:11:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:11 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:11 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:12 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:12.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:12.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:11:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:11:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:13 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0002360 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:13 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:14 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db40013a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:14.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:11:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:14.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:11:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 424 B/s wr, 1 op/s
Jan 21 11:11:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:11:15] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 21 11:11:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:11:15] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 21 11:11:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:15 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:15 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0002360 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:16 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:16.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:16.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:11:16.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:11:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:11:16.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:11:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:11:16.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:11:17 np0005590810 systemd-logind[795]: New session 41 of user zuul.
Jan 21 11:11:17 np0005590810 systemd[1]: Started Session 41 of User zuul.
Jan 21 11:11:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:11:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:17 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4002090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:17 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:18 np0005590810 python3.9[115386]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:11:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:11:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:18 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db0002360 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:18.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:11:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:18.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:11:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:11:19 np0005590810 python3.9[115542]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 11:11:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:19 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:19 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4002090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:20 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:11:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:20.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:11:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:11:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:20.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:11:20 np0005590810 python3.9[115735]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:11:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v103: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 339 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:11:21 np0005590810 systemd[1]: session-41.scope: Deactivated successfully.
Jan 21 11:11:21 np0005590810 systemd[1]: session-41.scope: Consumed 2.495s CPU time.
Jan 21 11:11:21 np0005590810 systemd-logind[795]: Session 41 logged out. Waiting for processes to exit.
Jan 21 11:11:21 np0005590810 systemd-logind[795]: Removed session 41.
Jan 21 11:11:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:21 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00032a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:21 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:22 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4002da0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:11:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:22.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:11:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:22.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 0 op/s
Jan 21 11:11:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:11:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:23 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:23 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00032a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:11:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:11:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:24 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:24.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:24.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 424 B/s rd, 0 op/s
Jan 21 11:11:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:11:25] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 21 11:11:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:11:25] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 21 11:11:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:25 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4002da0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:25 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:26 np0005590810 systemd-logind[795]: New session 42 of user zuul.
Jan 21 11:11:26 np0005590810 systemd[1]: Started Session 42 of User zuul.
Jan 21 11:11:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:26 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00032a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:26.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 21 11:11:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:26.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 21 11:11:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:11:26.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:11:27 np0005590810 python3.9[115921]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:11:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v106: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:11:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:27 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:27 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4003ab0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:28 np0005590810 python3.9[116076]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:11:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:11:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:28 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0036d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:28.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:28.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:11:29 np0005590810 python3.9[116233]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 11:11:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:29 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00032a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:29 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00032a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:30 np0005590810 python3.9[116318]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:11:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:30 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4003ab0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:11:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:30.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:11:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:30.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 21 11:11:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:31 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:31 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00032a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:32 np0005590810 python3.9[116473]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 11:11:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:32 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003c90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:32.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:32.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:11:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:11:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:11:33 np0005590810 podman[116767]: 2026-01-21 16:11:33.61293539 +0000 UTC m=+0.046950640 container create 40a809ead850ded691110776d66ab77a1ac304045fb3e17b8965af221c8786a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jones, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:11:33 np0005590810 systemd[90084]: Created slice User Background Tasks Slice.
Jan 21 11:11:33 np0005590810 systemd[90084]: Starting Cleanup of User's Temporary Files and Directories...
Jan 21 11:11:33 np0005590810 systemd[1]: Started libpod-conmon-40a809ead850ded691110776d66ab77a1ac304045fb3e17b8965af221c8786a5.scope.
Jan 21 11:11:33 np0005590810 systemd[90084]: Finished Cleanup of User's Temporary Files and Directories.
Jan 21 11:11:33 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:11:33 np0005590810 podman[116767]: 2026-01-21 16:11:33.682059653 +0000 UTC m=+0.116074923 container init 40a809ead850ded691110776d66ab77a1ac304045fb3e17b8965af221c8786a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jones, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:11:33 np0005590810 podman[116767]: 2026-01-21 16:11:33.58916159 +0000 UTC m=+0.023176890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:11:33 np0005590810 podman[116767]: 2026-01-21 16:11:33.691441544 +0000 UTC m=+0.125456794 container start 40a809ead850ded691110776d66ab77a1ac304045fb3e17b8965af221c8786a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:11:33 np0005590810 podman[116767]: 2026-01-21 16:11:33.695596352 +0000 UTC m=+0.129611612 container attach 40a809ead850ded691110776d66ab77a1ac304045fb3e17b8965af221c8786a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:11:33 np0005590810 quirky_jones[116807]: 167 167
Jan 21 11:11:33 np0005590810 systemd[1]: libpod-40a809ead850ded691110776d66ab77a1ac304045fb3e17b8965af221c8786a5.scope: Deactivated successfully.
Jan 21 11:11:33 np0005590810 podman[116767]: 2026-01-21 16:11:33.699583474 +0000 UTC m=+0.133598734 container died 40a809ead850ded691110776d66ab77a1ac304045fb3e17b8965af221c8786a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jones, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:11:33 np0005590810 systemd[1]: var-lib-containers-storage-overlay-f7e7ad84f7e4cfa2c3dc1144b4f8748576790b65a09331e1c34383352b655658-merged.mount: Deactivated successfully.
Jan 21 11:11:33 np0005590810 podman[116767]: 2026-01-21 16:11:33.740343456 +0000 UTC m=+0.174358706 container remove 40a809ead850ded691110776d66ab77a1ac304045fb3e17b8965af221c8786a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jones, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 11:11:33 np0005590810 systemd[1]: libpod-conmon-40a809ead850ded691110776d66ab77a1ac304045fb3e17b8965af221c8786a5.scope: Deactivated successfully.
Jan 21 11:11:33 np0005590810 podman[116881]: 2026-01-21 16:11:33.910153961 +0000 UTC m=+0.040613889 container create 679c0b9f29f2079625c554100d7f5785c244f47b9b27eff8b6cc39897cd087b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:11:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:33 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4003ab0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:33 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:33 np0005590810 systemd[1]: Started libpod-conmon-679c0b9f29f2079625c554100d7f5785c244f47b9b27eff8b6cc39897cd087b6.scope.
Jan 21 11:11:33 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:11:33 np0005590810 python3.9[116875]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:11:33 np0005590810 podman[116881]: 2026-01-21 16:11:33.89265781 +0000 UTC m=+0.023117758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:11:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06121071eee0b97adff46a643ddecc3ffc92a1cdb83511ca0f350de954848d25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06121071eee0b97adff46a643ddecc3ffc92a1cdb83511ca0f350de954848d25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06121071eee0b97adff46a643ddecc3ffc92a1cdb83511ca0f350de954848d25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06121071eee0b97adff46a643ddecc3ffc92a1cdb83511ca0f350de954848d25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06121071eee0b97adff46a643ddecc3ffc92a1cdb83511ca0f350de954848d25/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:34 np0005590810 podman[116881]: 2026-01-21 16:11:34.007890183 +0000 UTC m=+0.138350131 container init 679c0b9f29f2079625c554100d7f5785c244f47b9b27eff8b6cc39897cd087b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 21 11:11:34 np0005590810 podman[116881]: 2026-01-21 16:11:34.015765474 +0000 UTC m=+0.146225402 container start 679c0b9f29f2079625c554100d7f5785c244f47b9b27eff8b6cc39897cd087b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gauss, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:11:34 np0005590810 podman[116881]: 2026-01-21 16:11:34.019739396 +0000 UTC m=+0.150199314 container attach 679c0b9f29f2079625c554100d7f5785c244f47b9b27eff8b6cc39897cd087b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 11:11:34 np0005590810 beautiful_gauss[116898]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:11:34 np0005590810 beautiful_gauss[116898]: --> All data devices are unavailable
Jan 21 11:11:34 np0005590810 systemd[1]: libpod-679c0b9f29f2079625c554100d7f5785c244f47b9b27eff8b6cc39897cd087b6.scope: Deactivated successfully.
Jan 21 11:11:34 np0005590810 podman[116881]: 2026-01-21 16:11:34.40083953 +0000 UTC m=+0.531299478 container died 679c0b9f29f2079625c554100d7f5785c244f47b9b27eff8b6cc39897cd087b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gauss, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:11:34 np0005590810 systemd[1]: var-lib-containers-storage-overlay-06121071eee0b97adff46a643ddecc3ffc92a1cdb83511ca0f350de954848d25-merged.mount: Deactivated successfully.
Jan 21 11:11:34 np0005590810 podman[116881]: 2026-01-21 16:11:34.457864062 +0000 UTC m=+0.588323990 container remove 679c0b9f29f2079625c554100d7f5785c244f47b9b27eff8b6cc39897cd087b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gauss, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 21 11:11:34 np0005590810 systemd[1]: libpod-conmon-679c0b9f29f2079625c554100d7f5785c244f47b9b27eff8b6cc39897cd087b6.scope: Deactivated successfully.
Jan 21 11:11:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:34 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 21 11:11:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:34.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 21 11:11:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:34.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:34 np0005590810 python3.9[117127]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:11:35 np0005590810 podman[117184]: 2026-01-21 16:11:35.068006385 +0000 UTC m=+0.049523174 container create c460b2d7e9d66a0ecf02d169217ca57d0e8d008508059a2243d1712ff4d5fb23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_chatelet, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:11:35 np0005590810 systemd[1]: Started libpod-conmon-c460b2d7e9d66a0ecf02d169217ca57d0e8d008508059a2243d1712ff4d5fb23.scope.
Jan 21 11:11:35 np0005590810 podman[117184]: 2026-01-21 16:11:35.048350573 +0000 UTC m=+0.029867382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:11:35 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:11:35 np0005590810 podman[117184]: 2026-01-21 16:11:35.174616002 +0000 UTC m=+0.156132821 container init c460b2d7e9d66a0ecf02d169217ca57d0e8d008508059a2243d1712ff4d5fb23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_chatelet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:11:35 np0005590810 podman[117184]: 2026-01-21 16:11:35.183490577 +0000 UTC m=+0.165007366 container start c460b2d7e9d66a0ecf02d169217ca57d0e8d008508059a2243d1712ff4d5fb23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_chatelet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 21 11:11:35 np0005590810 podman[117184]: 2026-01-21 16:11:35.187913283 +0000 UTC m=+0.169430072 container attach c460b2d7e9d66a0ecf02d169217ca57d0e8d008508059a2243d1712ff4d5fb23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 21 11:11:35 np0005590810 adoring_chatelet[117223]: 167 167
Jan 21 11:11:35 np0005590810 systemd[1]: libpod-c460b2d7e9d66a0ecf02d169217ca57d0e8d008508059a2243d1712ff4d5fb23.scope: Deactivated successfully.
Jan 21 11:11:35 np0005590810 podman[117184]: 2026-01-21 16:11:35.191670668 +0000 UTC m=+0.173187467 container died c460b2d7e9d66a0ecf02d169217ca57d0e8d008508059a2243d1712ff4d5fb23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:11:35 np0005590810 systemd[1]: var-lib-containers-storage-overlay-806a840f505b51c6cc36dc2f63ff804b49a5e919238a3da1dfe2b4415d210731-merged.mount: Deactivated successfully.
Jan 21 11:11:35 np0005590810 podman[117184]: 2026-01-21 16:11:35.236772485 +0000 UTC m=+0.218289274 container remove c460b2d7e9d66a0ecf02d169217ca57d0e8d008508059a2243d1712ff4d5fb23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_chatelet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:11:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:11:35 np0005590810 systemd[1]: libpod-conmon-c460b2d7e9d66a0ecf02d169217ca57d0e8d008508059a2243d1712ff4d5fb23.scope: Deactivated successfully.
Jan 21 11:11:35 np0005590810 podman[117300]: 2026-01-21 16:11:35.41086295 +0000 UTC m=+0.051816200 container create be8c0e34c2911dbdac8db58fdb262969774b5c167b085873386cc7b968fd7065 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_bardeen, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:11:35 np0005590810 systemd[1]: Started libpod-conmon-be8c0e34c2911dbdac8db58fdb262969774b5c167b085873386cc7b968fd7065.scope.
Jan 21 11:11:35 np0005590810 podman[117300]: 2026-01-21 16:11:35.387369251 +0000 UTC m=+0.028322531 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:11:35 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:11:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fc511cc693d2435d23b526bb2730e61e9b56b2290994cd33670825d731ebb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fc511cc693d2435d23b526bb2730e61e9b56b2290994cd33670825d731ebb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fc511cc693d2435d23b526bb2730e61e9b56b2290994cd33670825d731ebb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fc511cc693d2435d23b526bb2730e61e9b56b2290994cd33670825d731ebb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:35 np0005590810 podman[117300]: 2026-01-21 16:11:35.509278885 +0000 UTC m=+0.150232165 container init be8c0e34c2911dbdac8db58fdb262969774b5c167b085873386cc7b968fd7065 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:11:35 np0005590810 podman[117300]: 2026-01-21 16:11:35.518655467 +0000 UTC m=+0.159608717 container start be8c0e34c2911dbdac8db58fdb262969774b5c167b085873386cc7b968fd7065 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 11:11:35 np0005590810 podman[117300]: 2026-01-21 16:11:35.522705001 +0000 UTC m=+0.163658251 container attach be8c0e34c2911dbdac8db58fdb262969774b5c167b085873386cc7b968fd7065 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:11:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:11:35] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 21 11:11:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:11:35] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]: {
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:    "0": [
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:        {
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            "devices": [
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "/dev/loop3"
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            ],
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            "lv_name": "ceph_lv0",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            "lv_size": "21470642176",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            "name": "ceph_lv0",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            "tags": {
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.cluster_name": "ceph",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.crush_device_class": "",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.encrypted": "0",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.osd_id": "0",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.type": "block",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.vdo": "0",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:                "ceph.with_tpm": "0"
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            },
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            "type": "block",
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:            "vg_name": "ceph_vg0"
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:        }
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]:    ]
Jan 21 11:11:35 np0005590810 cranky_bardeen[117342]: }
Jan 21 11:11:35 np0005590810 systemd[1]: libpod-be8c0e34c2911dbdac8db58fdb262969774b5c167b085873386cc7b968fd7065.scope: Deactivated successfully.
Jan 21 11:11:35 np0005590810 podman[117300]: 2026-01-21 16:11:35.886840572 +0000 UTC m=+0.527793822 container died be8c0e34c2911dbdac8db58fdb262969774b5c167b085873386cc7b968fd7065 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_bardeen, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 11:11:35 np0005590810 systemd[1]: var-lib-containers-storage-overlay-80fc511cc693d2435d23b526bb2730e61e9b56b2290994cd33670825d731ebb2-merged.mount: Deactivated successfully.
Jan 21 11:11:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:35 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003cb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:35 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db40047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:35 np0005590810 podman[117300]: 2026-01-21 16:11:35.943987886 +0000 UTC m=+0.584941136 container remove be8c0e34c2911dbdac8db58fdb262969774b5c167b085873386cc7b968fd7065 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_bardeen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 21 11:11:35 np0005590810 systemd[1]: libpod-conmon-be8c0e34c2911dbdac8db58fdb262969774b5c167b085873386cc7b968fd7065.scope: Deactivated successfully.
Jan 21 11:11:35 np0005590810 python3.9[117424]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:11:36 np0005590810 python3.9[117565]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:11:36 np0005590810 podman[117606]: 2026-01-21 16:11:36.558898509 +0000 UTC m=+0.043104806 container create 8a427a357c6ee7ef058edb1ce12e1d916957b13402bd10f98c124ac75b439e1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_brown, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 11:11:36 np0005590810 systemd[1]: Started libpod-conmon-8a427a357c6ee7ef058edb1ce12e1d916957b13402bd10f98c124ac75b439e1f.scope.
Jan 21 11:11:36 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:11:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:36 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:36 np0005590810 podman[117606]: 2026-01-21 16:11:36.541202995 +0000 UTC m=+0.025409322 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:11:36 np0005590810 podman[117606]: 2026-01-21 16:11:36.6481914 +0000 UTC m=+0.132397737 container init 8a427a357c6ee7ef058edb1ce12e1d916957b13402bd10f98c124ac75b439e1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_brown, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 21 11:11:36 np0005590810 podman[117606]: 2026-01-21 16:11:36.658893881 +0000 UTC m=+0.143100188 container start 8a427a357c6ee7ef058edb1ce12e1d916957b13402bd10f98c124ac75b439e1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:11:36 np0005590810 podman[117606]: 2026-01-21 16:11:36.663337832 +0000 UTC m=+0.147544159 container attach 8a427a357c6ee7ef058edb1ce12e1d916957b13402bd10f98c124ac75b439e1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Jan 21 11:11:36 np0005590810 systemd[1]: libpod-8a427a357c6ee7ef058edb1ce12e1d916957b13402bd10f98c124ac75b439e1f.scope: Deactivated successfully.
Jan 21 11:11:36 np0005590810 jovial_brown[117644]: 167 167
Jan 21 11:11:36 np0005590810 conmon[117644]: conmon 8a427a357c6ee7ef058e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a427a357c6ee7ef058edb1ce12e1d916957b13402bd10f98c124ac75b439e1f.scope/container/memory.events
Jan 21 11:11:36 np0005590810 podman[117606]: 2026-01-21 16:11:36.668511137 +0000 UTC m=+0.152717444 container died 8a427a357c6ee7ef058edb1ce12e1d916957b13402bd10f98c124ac75b439e1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_brown, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Jan 21 11:11:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:36.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:36 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ba31c36796e9a6b6bfc71f714271a909b976e1421ad85acf89236e13ed1cbeb6-merged.mount: Deactivated successfully.
Jan 21 11:11:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:36.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:36 np0005590810 podman[117606]: 2026-01-21 16:11:36.717572324 +0000 UTC m=+0.201778631 container remove 8a427a357c6ee7ef058edb1ce12e1d916957b13402bd10f98c124ac75b439e1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 21 11:11:36 np0005590810 systemd[1]: libpod-conmon-8a427a357c6ee7ef058edb1ce12e1d916957b13402bd10f98c124ac75b439e1f.scope: Deactivated successfully.
Jan 21 11:11:36 np0005590810 podman[117695]: 2026-01-21 16:11:36.906979288 +0000 UTC m=+0.047385933 container create 92430f55d00bcb0f685f85492e1ea2d55c123800c90640de1cb14977bbd4884d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_pasteur, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 11:11:36 np0005590810 systemd[1]: Started libpod-conmon-92430f55d00bcb0f685f85492e1ea2d55c123800c90640de1cb14977bbd4884d.scope.
Jan 21 11:11:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:11:36.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:11:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:11:36.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:11:36 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:11:36 np0005590810 podman[117695]: 2026-01-21 16:11:36.887082313 +0000 UTC m=+0.027488978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:11:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072442affac85c211f34c7560cb9790cf8ca48d03fcd614fcc1f2d752fff6ea2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072442affac85c211f34c7560cb9790cf8ca48d03fcd614fcc1f2d752fff6ea2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072442affac85c211f34c7560cb9790cf8ca48d03fcd614fcc1f2d752fff6ea2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072442affac85c211f34c7560cb9790cf8ca48d03fcd614fcc1f2d752fff6ea2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:11:37 np0005590810 podman[117695]: 2026-01-21 16:11:37.004489239 +0000 UTC m=+0.144895924 container init 92430f55d00bcb0f685f85492e1ea2d55c123800c90640de1cb14977bbd4884d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 21 11:11:37 np0005590810 podman[117695]: 2026-01-21 16:11:37.012784014 +0000 UTC m=+0.153190669 container start 92430f55d00bcb0f685f85492e1ea2d55c123800c90640de1cb14977bbd4884d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_pasteur, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:11:37 np0005590810 podman[117695]: 2026-01-21 16:11:37.016280936 +0000 UTC m=+0.156687611 container attach 92430f55d00bcb0f685f85492e1ea2d55c123800c90640de1cb14977bbd4884d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 21 11:11:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:11:37 np0005590810 python3.9[117839]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:11:37 np0005590810 lvm[117913]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:11:37 np0005590810 lvm[117913]: VG ceph_vg0 finished
Jan 21 11:11:37 np0005590810 brave_pasteur[117740]: {}
Jan 21 11:11:37 np0005590810 systemd[1]: libpod-92430f55d00bcb0f685f85492e1ea2d55c123800c90640de1cb14977bbd4884d.scope: Deactivated successfully.
Jan 21 11:11:37 np0005590810 systemd[1]: libpod-92430f55d00bcb0f685f85492e1ea2d55c123800c90640de1cb14977bbd4884d.scope: Consumed 1.331s CPU time.
Jan 21 11:11:37 np0005590810 podman[117695]: 2026-01-21 16:11:37.818483996 +0000 UTC m=+0.958890651 container died 92430f55d00bcb0f685f85492e1ea2d55c123800c90640de1cb14977bbd4884d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_pasteur, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:11:37 np0005590810 systemd[1]: var-lib-containers-storage-overlay-072442affac85c211f34c7560cb9790cf8ca48d03fcd614fcc1f2d752fff6ea2-merged.mount: Deactivated successfully.
Jan 21 11:11:37 np0005590810 podman[117695]: 2026-01-21 16:11:37.875711232 +0000 UTC m=+1.016117877 container remove 92430f55d00bcb0f685f85492e1ea2d55c123800c90640de1cb14977bbd4884d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_pasteur, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:11:37 np0005590810 systemd[1]: libpod-conmon-92430f55d00bcb0f685f85492e1ea2d55c123800c90640de1cb14977bbd4884d.scope: Deactivated successfully.
Jan 21 11:11:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:37 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:37 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:11:38 np0005590810 python3.9[117984]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:11:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:11:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:11:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:11:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:38 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db40047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:11:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:38.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:11:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:11:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:38.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:11:39 np0005590810 python3.9[118137]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:11:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:11:39
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.control', 'vms', '.rgw.root', 'default.rgw.meta', 'backups', 'volumes', 'images']
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:11:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:11:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:11:39 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:11:39 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:11:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:11:39 np0005590810 python3.9[118315]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:11:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:39 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:39 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:40 np0005590810 python3.9[118467]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:11:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:40 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:11:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:40.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:11:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:40.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:40 np0005590810 python3.9[118620]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:11:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 0 op/s
Jan 21 11:11:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:41 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:41 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:42 np0005590810 python3.9[118773]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:11:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:42 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac001080 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:42.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:11:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:42.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:11:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:11:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:11:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:43 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:43 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003cd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:44 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:44.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:44.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:44 np0005590810 python3.9[118930]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:11:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 0 op/s
Jan 21 11:11:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161145 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:11:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:11:45] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Jan 21 11:11:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:11:45] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Jan 21 11:11:45 np0005590810 python3.9[119086]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:11:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:45 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac001080 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:45 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac001080 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:46 np0005590810 python3.9[119238]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:11:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:46 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:46.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:46.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161146 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:11:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:11:46.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:11:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:11:47 np0005590810 python3.9[119392]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:11:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:47 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:47 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac001080 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:11:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:48 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003e90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:48 np0005590810 python3.9[119545]: ansible-service_facts Invoked
Jan 21 11:11:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:48.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:48 np0005590810 network[119563]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 11:11:48 np0005590810 network[119564]: 'network-scripts' will be removed from distribution in near future.
Jan 21 11:11:48 np0005590810 network[119565]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 11:11:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:48.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:11:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:49 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:49 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:50 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac002e70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:50.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:50.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:11:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:51 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003eb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:51 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:52 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:11:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:52.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:11:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:52.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:11:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:11:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:11:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac002e70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:53 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003ed0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:54 np0005590810 python3.9[120022]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:11:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:11:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:11:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:54 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:11:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:54.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:11:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:54.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Jan 21 11:11:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:11:55] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 21 11:11:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:11:55] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 21 11:11:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:55 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:55 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac002e70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:56 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:11:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:56 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:11:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:56 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:11:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:56.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:11:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:11:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:56.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:11:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:11:56.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:11:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:11:56.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:11:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:11:57 np0005590810 python3.9[120204]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 21 11:11:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:57 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:11:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:57 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:57 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:11:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:58 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac004300 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:11:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:11:58.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:11:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:11:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:11:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:11:58.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:11:58 np0005590810 python3.9[120357]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:11:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:11:59 np0005590810 python3.9[120436]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:11:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:59 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003f10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:11:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:11:59 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:00 np0005590810 python3.9[120588]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:00 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:00.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:00.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:00 np0005590810 python3.9[120666]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:01 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:12:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:12:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:01 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:01 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003f30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:02 np0005590810 python3.9[120820]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:02 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:02.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:02.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:12:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:12:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:03 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac004300 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:03 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:04 np0005590810 python3.9[120974]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 11:12:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:04 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:04.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:04.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:12:05 np0005590810 python3.9[121059]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:12:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161205 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:12:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:12:05] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 21 11:12:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:12:05] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 21 11:12:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:05 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:05 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac004300 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:06 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:06.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:06.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161206 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:12:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:12:06.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:12:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:12:06.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:12:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:12:06.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:12:07 np0005590810 systemd[1]: session-42.scope: Deactivated successfully.
Jan 21 11:12:07 np0005590810 systemd[1]: session-42.scope: Consumed 25.237s CPU time.
Jan 21 11:12:07 np0005590810 systemd-logind[795]: Session 42 logged out. Waiting for processes to exit.
Jan 21 11:12:07 np0005590810 systemd-logind[795]: Removed session 42.
Jan 21 11:12:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
Jan 21 11:12:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:07 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003f70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:07 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:12:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:08 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac004300 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:08.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:08.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:12:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:12:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:12:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:12:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
Jan 21 11:12:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:12:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:12:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:12:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:12:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:09 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:09 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003f90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:10 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:10.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:12:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:10.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:12:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
Jan 21 11:12:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:11 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac004300 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:11 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:12 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d90003fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:12.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:12.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:12:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:12:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:14 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:14 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9dac004300 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:14 np0005590810 systemd-logind[795]: New session 43 of user zuul.
Jan 21 11:12:14 np0005590810 systemd[1]: Started Session 43 of User zuul.
Jan 21 11:12:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:14 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d9c0043e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:14.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:12:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:14.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:12:14 np0005590810 python3.9[121256]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:12:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:12:15] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 21 11:12:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:12:15] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 21 11:12:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:15 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9db4001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[106772]: 21/01/2026 16:12:15 : epoch 6970fa51 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da40008d0 fd 42 proxy ignored for local
Jan 21 11:12:15 np0005590810 kernel: ganesha.nfsd[121095]: segfault at 50 ip 00007f9e40de832e sp 00007f9dba7fb210 error 4 in libntirpc.so.5.8[7f9e40dcd000+2c000] likely on CPU 3 (core 0, socket 3)
Jan 21 11:12:15 np0005590810 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 21 11:12:15 np0005590810 systemd[1]: Started Process Core Dump (PID 121435/UID 0).
Jan 21 11:12:16 np0005590810 python3.9[121434]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:16 np0005590810 python3.9[121514]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:16.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:16.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:16 np0005590810 systemd[1]: session-43.scope: Deactivated successfully.
Jan 21 11:12:16 np0005590810 systemd[1]: session-43.scope: Consumed 1.643s CPU time.
Jan 21 11:12:16 np0005590810 systemd-logind[795]: Session 43 logged out. Waiting for processes to exit.
Jan 21 11:12:16 np0005590810 systemd-logind[795]: Removed session 43.
Jan 21 11:12:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:12:16.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:12:17 np0005590810 systemd-coredump[121436]: Process 106777 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 61:#012#0  0x00007f9e40de832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Jan 21 11:12:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:12:17 np0005590810 systemd[1]: systemd-coredump@3-121435-0.service: Deactivated successfully.
Jan 21 11:12:17 np0005590810 systemd[1]: systemd-coredump@3-121435-0.service: Consumed 1.231s CPU time.
Jan 21 11:12:17 np0005590810 podman[121545]: 2026-01-21 16:12:17.373735133 +0000 UTC m=+0.036955880 container died 183fce5b37958e09aaaa8f5501c79b2219f76131ce3829517233daa26012bbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 21 11:12:17 np0005590810 systemd[1]: var-lib-containers-storage-overlay-26b5886d2179bb00101b03dca385047336456e8799f0b4a1c29ad3d81ba988f0-merged.mount: Deactivated successfully.
Jan 21 11:12:17 np0005590810 podman[121545]: 2026-01-21 16:12:17.579069476 +0000 UTC m=+0.242290193 container remove 183fce5b37958e09aaaa8f5501c79b2219f76131ce3829517233daa26012bbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 21 11:12:17 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Main process exited, code=exited, status=139/n/a
Jan 21 11:12:17 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Failed with result 'exit-code'.
Jan 21 11:12:17 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.785s CPU time.
Jan 21 11:12:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:12:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:18.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:18.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:12:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:20.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:20.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:12:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161221 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:12:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:22.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:22.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:12:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:12:24 np0005590810 systemd-logind[795]: New session 44 of user zuul.
Jan 21 11:12:24 np0005590810 systemd[1]: Started Session 44 of User zuul.
Jan 21 11:12:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:12:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:12:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:24.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:24.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:12:25 np0005590810 python3.9[121749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:12:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:12:25] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:12:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:12:25] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:12:26 np0005590810 python3.9[121908]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:26.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:26.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:12:26.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:12:27 np0005590810 python3.9[122084]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:12:27 np0005590810 python3.9[122163]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.644kevo2 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:27 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Scheduled restart job, restart counter is at 4.
Jan 21 11:12:27 np0005590810 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:12:27 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.785s CPU time.
Jan 21 11:12:27 np0005590810 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:12:28 np0005590810 podman[122235]: 2026-01-21 16:12:28.048412548 +0000 UTC m=+0.038034815 container create 9857929bc801b6e06865a8c2f41dbee78deda79cfde17ae150dd979a7a98e8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 21 11:12:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b36596e16f84ac66ca5107ebb8e98be239e5103f23dd7ab8813c2b76c6fea40/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b36596e16f84ac66ca5107ebb8e98be239e5103f23dd7ab8813c2b76c6fea40/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b36596e16f84ac66ca5107ebb8e98be239e5103f23dd7ab8813c2b76c6fea40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b36596e16f84ac66ca5107ebb8e98be239e5103f23dd7ab8813c2b76c6fea40/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbatwb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:28 np0005590810 podman[122235]: 2026-01-21 16:12:28.115474139 +0000 UTC m=+0.105096426 container init 9857929bc801b6e06865a8c2f41dbee78deda79cfde17ae150dd979a7a98e8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:12:28 np0005590810 podman[122235]: 2026-01-21 16:12:28.122364858 +0000 UTC m=+0.111987135 container start 9857929bc801b6e06865a8c2f41dbee78deda79cfde17ae150dd979a7a98e8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:12:28 np0005590810 bash[122235]: 9857929bc801b6e06865a8c2f41dbee78deda79cfde17ae150dd979a7a98e8d6
Jan 21 11:12:28 np0005590810 podman[122235]: 2026-01-21 16:12:28.030547458 +0000 UTC m=+0.020169765 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:12:28 np0005590810 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:12:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:28 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 21 11:12:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:28 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 21 11:12:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:28 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 21 11:12:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:28 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 21 11:12:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:28 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 21 11:12:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:28 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 21 11:12:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:28 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 21 11:12:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:28 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:12:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:12:28 np0005590810 python3.9[122419]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:28.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:12:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:28.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:12:29 np0005590810 python3.9[122498]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.dednnlxu recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:12:30 np0005590810 python3.9[122651]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:12:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:30.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:30.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:30 np0005590810 python3.9[122803]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:31 np0005590810 python3.9[122882]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:12:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:12:31 np0005590810 python3.9[123035]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:32 np0005590810 python3.9[123113]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:12:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:32.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:32.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:33 np0005590810 python3.9[123266]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:12:33 np0005590810 systemd[1]: session-18.scope: Deactivated successfully.
Jan 21 11:12:33 np0005590810 systemd[1]: session-18.scope: Consumed 1min 32.306s CPU time.
Jan 21 11:12:33 np0005590810 systemd-logind[795]: Session 18 logged out. Waiting for processes to exit.
Jan 21 11:12:33 np0005590810 systemd-logind[795]: Removed session 18.
Jan 21 11:12:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:12:34 np0005590810 python3.9[123419]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:34 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:12:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:34 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:12:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:34 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 21 11:12:34 np0005590810 python3.9[123497]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:34.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:34.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:12:35 np0005590810 python3.9[123651]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:12:35] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:12:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:12:35] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:12:35 np0005590810 python3.9[123730]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:36.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:36.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:12:36.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:12:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:12:36.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:12:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:12:36.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:12:37 np0005590810 python3.9[123907]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:12:37 np0005590810 systemd[1]: Reloading.
Jan 21 11:12:37 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:12:37 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:12:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:12:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161237 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:12:38 np0005590810 python3.9[124097]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:12:38 np0005590810 python3.9[124175]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:38.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:38.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:38 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:12:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:38 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:12:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:38 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:12:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:39 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.033171) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011959033249, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1379, "num_deletes": 250, "total_data_size": 2604616, "memory_usage": 2651416, "flush_reason": "Manual Compaction"}
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011959055000, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1532805, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10796, "largest_seqno": 12174, "table_properties": {"data_size": 1528000, "index_size": 2201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12138, "raw_average_key_size": 19, "raw_value_size": 1517622, "raw_average_value_size": 2496, "num_data_blocks": 98, "num_entries": 608, "num_filter_entries": 608, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011822, "oldest_key_time": 1769011822, "file_creation_time": 1769011959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 21905 microseconds, and 4404 cpu microseconds.
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.055085) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1532805 bytes OK
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.055105) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.060417) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.060440) EVENT_LOG_v1 {"time_micros": 1769011959060433, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.060462) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2598733, prev total WAL file size 2599437, number of live WAL files 2.
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.061792) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1496KB)], [26(11MB)]
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011959061830, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 14099558, "oldest_snapshot_seqno": -1}
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4165 keys, 11965040 bytes, temperature: kUnknown
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011959147089, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 11965040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11933249, "index_size": 20289, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 105851, "raw_average_key_size": 25, "raw_value_size": 11853238, "raw_average_value_size": 2845, "num_data_blocks": 870, "num_entries": 4165, "num_filter_entries": 4165, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769011959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.147371) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 11965040 bytes
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.148560) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.2 rd, 140.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 12.0 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(17.0) write-amplify(7.8) OK, records in: 4617, records dropped: 452 output_compression: NoCompression
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.148600) EVENT_LOG_v1 {"time_micros": 1769011959148585, "job": 10, "event": "compaction_finished", "compaction_time_micros": 85340, "compaction_time_cpu_micros": 35543, "output_level": 6, "num_output_files": 1, "total_output_size": 11965040, "num_input_records": 4617, "num_output_records": 4165, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011959149204, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769011959151352, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.061699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.151419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.151423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.151424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.151425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:12:39.151427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:12:39
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.control', 'images', 'default.rgw.log', 'volumes', '.rgw.root', '.mgr', '.nfs', 'default.rgw.meta']
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:12:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:12:39 np0005590810 python3.9[124329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:12:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:12:39 np0005590810 python3.9[124457]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:12:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:12:40 np0005590810 python3.9[124691]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:12:40 np0005590810 systemd[1]: Reloading.
Jan 21 11:12:40 np0005590810 podman[124735]: 2026-01-21 16:12:40.653685866 +0000 UTC m=+0.046716496 container create b1541d966c90ccba9119fd05cba29ab918d540d078fc65362b2691257f5fdc8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_heisenberg, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:12:40 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:12:40 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:12:40 np0005590810 podman[124735]: 2026-01-21 16:12:40.631006394 +0000 UTC m=+0.024037044 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:12:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:40.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:40.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:40 np0005590810 systemd[1]: Started libpod-conmon-b1541d966c90ccba9119fd05cba29ab918d540d078fc65362b2691257f5fdc8e.scope.
Jan 21 11:12:40 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:12:40 np0005590810 podman[124735]: 2026-01-21 16:12:40.977178082 +0000 UTC m=+0.370208732 container init b1541d966c90ccba9119fd05cba29ab918d540d078fc65362b2691257f5fdc8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_heisenberg, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 21 11:12:40 np0005590810 podman[124735]: 2026-01-21 16:12:40.985315557 +0000 UTC m=+0.378346177 container start b1541d966c90ccba9119fd05cba29ab918d540d078fc65362b2691257f5fdc8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_heisenberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 21 11:12:40 np0005590810 podman[124735]: 2026-01-21 16:12:40.988362793 +0000 UTC m=+0.381393423 container attach b1541d966c90ccba9119fd05cba29ab918d540d078fc65362b2691257f5fdc8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:12:40 np0005590810 angry_heisenberg[124790]: 167 167
Jan 21 11:12:40 np0005590810 podman[124735]: 2026-01-21 16:12:40.990658445 +0000 UTC m=+0.383689065 container died b1541d966c90ccba9119fd05cba29ab918d540d078fc65362b2691257f5fdc8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_heisenberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:12:40 np0005590810 systemd[1]: Starting Create netns directory...
Jan 21 11:12:40 np0005590810 systemd[1]: libpod-b1541d966c90ccba9119fd05cba29ab918d540d078fc65362b2691257f5fdc8e.scope: Deactivated successfully.
Jan 21 11:12:41 np0005590810 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 11:12:41 np0005590810 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 11:12:41 np0005590810 systemd[1]: Finished Create netns directory.
Jan 21 11:12:41 np0005590810 systemd[1]: var-lib-containers-storage-overlay-816f27eff177f2a6ec1ea112653bd39b8869d5b1c4bfdfbd6212c6e3320bab1d-merged.mount: Deactivated successfully.
Jan 21 11:12:41 np0005590810 podman[124735]: 2026-01-21 16:12:41.032163377 +0000 UTC m=+0.425194007 container remove b1541d966c90ccba9119fd05cba29ab918d540d078fc65362b2691257f5fdc8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_heisenberg, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:12:41 np0005590810 systemd[1]: libpod-conmon-b1541d966c90ccba9119fd05cba29ab918d540d078fc65362b2691257f5fdc8e.scope: Deactivated successfully.
Jan 21 11:12:41 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:12:41 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:12:41 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:12:41 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:12:41 np0005590810 podman[124843]: 2026-01-21 16:12:41.186862379 +0000 UTC m=+0.040148120 container create 8aac345f7b2d89a68ecfe1caba16d5c4d87996f910e0fdfa289dc90e783f7b50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 21 11:12:41 np0005590810 systemd[1]: Started libpod-conmon-8aac345f7b2d89a68ecfe1caba16d5c4d87996f910e0fdfa289dc90e783f7b50.scope.
Jan 21 11:12:41 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:12:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0c8186038df0bc02b60e15460fca12b9c75b8943ebe01cf609081988630a52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0c8186038df0bc02b60e15460fca12b9c75b8943ebe01cf609081988630a52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0c8186038df0bc02b60e15460fca12b9c75b8943ebe01cf609081988630a52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0c8186038df0bc02b60e15460fca12b9c75b8943ebe01cf609081988630a52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0c8186038df0bc02b60e15460fca12b9c75b8943ebe01cf609081988630a52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:41 np0005590810 podman[124843]: 2026-01-21 16:12:41.263684349 +0000 UTC m=+0.116970110 container init 8aac345f7b2d89a68ecfe1caba16d5c4d87996f910e0fdfa289dc90e783f7b50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_snyder, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 21 11:12:41 np0005590810 podman[124843]: 2026-01-21 16:12:41.17033604 +0000 UTC m=+0.023621801 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:12:41 np0005590810 podman[124843]: 2026-01-21 16:12:41.272486445 +0000 UTC m=+0.125772186 container start 8aac345f7b2d89a68ecfe1caba16d5c4d87996f910e0fdfa289dc90e783f7b50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_snyder, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 21 11:12:41 np0005590810 podman[124843]: 2026-01-21 16:12:41.276340575 +0000 UTC m=+0.129626346 container attach 8aac345f7b2d89a68ecfe1caba16d5c4d87996f910e0fdfa289dc90e783f7b50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:12:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Jan 21 11:12:41 np0005590810 distracted_snyder[124860]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:12:41 np0005590810 distracted_snyder[124860]: --> All data devices are unavailable
Jan 21 11:12:41 np0005590810 systemd[1]: libpod-8aac345f7b2d89a68ecfe1caba16d5c4d87996f910e0fdfa289dc90e783f7b50.scope: Deactivated successfully.
Jan 21 11:12:41 np0005590810 podman[124843]: 2026-01-21 16:12:41.584712248 +0000 UTC m=+0.437997999 container died 8aac345f7b2d89a68ecfe1caba16d5c4d87996f910e0fdfa289dc90e783f7b50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:12:41 np0005590810 podman[124843]: 2026-01-21 16:12:41.633641183 +0000 UTC m=+0.486926924 container remove 8aac345f7b2d89a68ecfe1caba16d5c4d87996f910e0fdfa289dc90e783f7b50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_snyder, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:12:41 np0005590810 systemd[1]: libpod-conmon-8aac345f7b2d89a68ecfe1caba16d5c4d87996f910e0fdfa289dc90e783f7b50.scope: Deactivated successfully.
Jan 21 11:12:41 np0005590810 python3.9[125011]: ansible-ansible.builtin.service_facts Invoked
Jan 21 11:12:41 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8c0c8186038df0bc02b60e15460fca12b9c75b8943ebe01cf609081988630a52-merged.mount: Deactivated successfully.
Jan 21 11:12:41 np0005590810 network[125078]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 11:12:41 np0005590810 network[125079]: 'network-scripts' will be removed from distribution in near future.
Jan 21 11:12:41 np0005590810 network[125080]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 11:12:42 np0005590810 podman[125126]: 2026-01-21 16:12:42.146801669 +0000 UTC m=+0.036833496 container create 10d50a5ced2781bce73c0454e7eca31bd6f71059b67961289010c0da5ec293f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_cerf, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 11:12:42 np0005590810 podman[125126]: 2026-01-21 16:12:42.129893879 +0000 UTC m=+0.019925726 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:12:42 np0005590810 systemd[1]: Started libpod-conmon-10d50a5ced2781bce73c0454e7eca31bd6f71059b67961289010c0da5ec293f7.scope.
Jan 21 11:12:42 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:12:42 np0005590810 podman[125126]: 2026-01-21 16:12:42.656049942 +0000 UTC m=+0.546081799 container init 10d50a5ced2781bce73c0454e7eca31bd6f71059b67961289010c0da5ec293f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:12:42 np0005590810 podman[125126]: 2026-01-21 16:12:42.66332786 +0000 UTC m=+0.553359687 container start 10d50a5ced2781bce73c0454e7eca31bd6f71059b67961289010c0da5ec293f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_cerf, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:12:42 np0005590810 agitated_cerf[125144]: 167 167
Jan 21 11:12:42 np0005590810 systemd[1]: libpod-10d50a5ced2781bce73c0454e7eca31bd6f71059b67961289010c0da5ec293f7.scope: Deactivated successfully.
Jan 21 11:12:42 np0005590810 podman[125126]: 2026-01-21 16:12:42.670262588 +0000 UTC m=+0.560294425 container attach 10d50a5ced2781bce73c0454e7eca31bd6f71059b67961289010c0da5ec293f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_cerf, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:12:42 np0005590810 podman[125126]: 2026-01-21 16:12:42.672940112 +0000 UTC m=+0.562971939 container died 10d50a5ced2781bce73c0454e7eca31bd6f71059b67961289010c0da5ec293f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:12:42 np0005590810 systemd[1]: var-lib-containers-storage-overlay-47771cd04ac05f90e43b48dfa5e99e9ed99615afeb90329117b30cbd832cff9f-merged.mount: Deactivated successfully.
Jan 21 11:12:42 np0005590810 podman[125126]: 2026-01-21 16:12:42.714025701 +0000 UTC m=+0.604057528 container remove 10d50a5ced2781bce73c0454e7eca31bd6f71059b67961289010c0da5ec293f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_cerf, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:12:42 np0005590810 systemd[1]: libpod-conmon-10d50a5ced2781bce73c0454e7eca31bd6f71059b67961289010c0da5ec293f7.scope: Deactivated successfully.
Jan 21 11:12:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:42.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:42.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:42 np0005590810 podman[125182]: 2026-01-21 16:12:42.871771989 +0000 UTC m=+0.043759174 container create bdfb3d535e83f6893b3ed94f5099fbb37194f1093bf5b98ca7621ce6725c2b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 21 11:12:42 np0005590810 systemd[1]: Started libpod-conmon-bdfb3d535e83f6893b3ed94f5099fbb37194f1093bf5b98ca7621ce6725c2b9a.scope.
Jan 21 11:12:42 np0005590810 podman[125182]: 2026-01-21 16:12:42.851895505 +0000 UTC m=+0.023882690 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:12:42 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:12:42 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2853278a8f1e6ae817faaad18ad64575bda229075e963e72ba1a1621257e83bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:42 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2853278a8f1e6ae817faaad18ad64575bda229075e963e72ba1a1621257e83bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:42 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2853278a8f1e6ae817faaad18ad64575bda229075e963e72ba1a1621257e83bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:42 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2853278a8f1e6ae817faaad18ad64575bda229075e963e72ba1a1621257e83bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:42 np0005590810 podman[125182]: 2026-01-21 16:12:42.968875264 +0000 UTC m=+0.140862459 container init bdfb3d535e83f6893b3ed94f5099fbb37194f1093bf5b98ca7621ce6725c2b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:12:42 np0005590810 podman[125182]: 2026-01-21 16:12:42.977389162 +0000 UTC m=+0.149376347 container start bdfb3d535e83f6893b3ed94f5099fbb37194f1093bf5b98ca7621ce6725c2b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_williamson, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:12:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:42 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:12:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:42 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:12:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:42 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:12:42 np0005590810 podman[125182]: 2026-01-21 16:12:42.982374448 +0000 UTC m=+0.154361673 container attach bdfb3d535e83f6893b3ed94f5099fbb37194f1093bf5b98ca7621ce6725c2b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_williamson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]: {
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:    "0": [
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:        {
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            "devices": [
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "/dev/loop3"
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            ],
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            "lv_name": "ceph_lv0",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            "lv_size": "21470642176",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            "name": "ceph_lv0",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            "tags": {
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.cluster_name": "ceph",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.crush_device_class": "",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.encrypted": "0",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.osd_id": "0",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.type": "block",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.vdo": "0",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:                "ceph.with_tpm": "0"
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            },
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            "type": "block",
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:            "vg_name": "ceph_vg0"
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:        }
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]:    ]
Jan 21 11:12:43 np0005590810 interesting_williamson[125204]: }
Jan 21 11:12:43 np0005590810 systemd[1]: libpod-bdfb3d535e83f6893b3ed94f5099fbb37194f1093bf5b98ca7621ce6725c2b9a.scope: Deactivated successfully.
Jan 21 11:12:43 np0005590810 podman[125182]: 2026-01-21 16:12:43.269271617 +0000 UTC m=+0.441258802 container died bdfb3d535e83f6893b3ed94f5099fbb37194f1093bf5b98ca7621ce6725c2b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_williamson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 11:12:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 21 11:12:43 np0005590810 systemd[1]: var-lib-containers-storage-overlay-2853278a8f1e6ae817faaad18ad64575bda229075e963e72ba1a1621257e83bd-merged.mount: Deactivated successfully.
Jan 21 11:12:43 np0005590810 podman[125182]: 2026-01-21 16:12:43.319466171 +0000 UTC m=+0.491453356 container remove bdfb3d535e83f6893b3ed94f5099fbb37194f1093bf5b98ca7621ce6725c2b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_williamson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:12:43 np0005590810 systemd[1]: libpod-conmon-bdfb3d535e83f6893b3ed94f5099fbb37194f1093bf5b98ca7621ce6725c2b9a.scope: Deactivated successfully.
Jan 21 11:12:43 np0005590810 podman[125342]: 2026-01-21 16:12:43.921380931 +0000 UTC m=+0.040846273 container create c7ae8b03f53d1b8bc509e449f34eb2d977895a2aa414ebe39686c1ad43827d14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:12:43 np0005590810 systemd[1]: Started libpod-conmon-c7ae8b03f53d1b8bc509e449f34eb2d977895a2aa414ebe39686c1ad43827d14.scope.
Jan 21 11:12:43 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:12:43 np0005590810 podman[125342]: 2026-01-21 16:12:43.904300365 +0000 UTC m=+0.023765727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:12:44 np0005590810 podman[125342]: 2026-01-21 16:12:44.004661393 +0000 UTC m=+0.124126765 container init c7ae8b03f53d1b8bc509e449f34eb2d977895a2aa414ebe39686c1ad43827d14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_johnson, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:12:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:12:44 np0005590810 podman[125342]: 2026-01-21 16:12:44.013085927 +0000 UTC m=+0.132551369 container start c7ae8b03f53d1b8bc509e449f34eb2d977895a2aa414ebe39686c1ad43827d14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_johnson, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:12:44 np0005590810 sleepy_johnson[125358]: 167 167
Jan 21 11:12:44 np0005590810 systemd[1]: libpod-c7ae8b03f53d1b8bc509e449f34eb2d977895a2aa414ebe39686c1ad43827d14.scope: Deactivated successfully.
Jan 21 11:12:44 np0005590810 podman[125342]: 2026-01-21 16:12:44.020360525 +0000 UTC m=+0.139825887 container attach c7ae8b03f53d1b8bc509e449f34eb2d977895a2aa414ebe39686c1ad43827d14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_johnson, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:12:44 np0005590810 podman[125342]: 2026-01-21 16:12:44.020876061 +0000 UTC m=+0.140341403 container died c7ae8b03f53d1b8bc509e449f34eb2d977895a2aa414ebe39686c1ad43827d14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_johnson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:12:44 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6743d99f64176b12a7809b9de2597ca17f9c7a37ce69c743fec7abd63b86ab85-merged.mount: Deactivated successfully.
Jan 21 11:12:44 np0005590810 podman[125342]: 2026-01-21 16:12:44.060334219 +0000 UTC m=+0.179799551 container remove c7ae8b03f53d1b8bc509e449f34eb2d977895a2aa414ebe39686c1ad43827d14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_johnson, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 11:12:44 np0005590810 systemd[1]: libpod-conmon-c7ae8b03f53d1b8bc509e449f34eb2d977895a2aa414ebe39686c1ad43827d14.scope: Deactivated successfully.
Jan 21 11:12:44 np0005590810 podman[125382]: 2026-01-21 16:12:44.217199939 +0000 UTC m=+0.045026303 container create a64c931860f1cbfd1617a433e8d143db23760e7f5c985595ef335e7cf886b8ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 21 11:12:44 np0005590810 systemd[1]: Started libpod-conmon-a64c931860f1cbfd1617a433e8d143db23760e7f5c985595ef335e7cf886b8ee.scope.
Jan 21 11:12:44 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:12:44 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ca852298e594a31bda4d20a8ad2aee575e5775d01dc981f303270b0d21d91f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:44 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ca852298e594a31bda4d20a8ad2aee575e5775d01dc981f303270b0d21d91f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:44 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ca852298e594a31bda4d20a8ad2aee575e5775d01dc981f303270b0d21d91f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:44 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ca852298e594a31bda4d20a8ad2aee575e5775d01dc981f303270b0d21d91f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:12:44 np0005590810 podman[125382]: 2026-01-21 16:12:44.197920365 +0000 UTC m=+0.025746749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:12:44 np0005590810 podman[125382]: 2026-01-21 16:12:44.301818964 +0000 UTC m=+0.129645348 container init a64c931860f1cbfd1617a433e8d143db23760e7f5c985595ef335e7cf886b8ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:12:44 np0005590810 podman[125382]: 2026-01-21 16:12:44.309317869 +0000 UTC m=+0.137144233 container start a64c931860f1cbfd1617a433e8d143db23760e7f5c985595ef335e7cf886b8ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:12:44 np0005590810 podman[125382]: 2026-01-21 16:12:44.312967323 +0000 UTC m=+0.140793837 container attach a64c931860f1cbfd1617a433e8d143db23760e7f5c985595ef335e7cf886b8ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:12:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:44.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:44.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:45 np0005590810 lvm[125501]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:12:45 np0005590810 lvm[125501]: VG ceph_vg0 finished
Jan 21 11:12:45 np0005590810 youthful_fermi[125398]: {}
Jan 21 11:12:45 np0005590810 systemd[1]: libpod-a64c931860f1cbfd1617a433e8d143db23760e7f5c985595ef335e7cf886b8ee.scope: Deactivated successfully.
Jan 21 11:12:45 np0005590810 systemd[1]: libpod-a64c931860f1cbfd1617a433e8d143db23760e7f5c985595ef335e7cf886b8ee.scope: Consumed 1.245s CPU time.
Jan 21 11:12:45 np0005590810 podman[125382]: 2026-01-21 16:12:45.105276465 +0000 UTC m=+0.933102869 container died a64c931860f1cbfd1617a433e8d143db23760e7f5c985595ef335e7cf886b8ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:12:45 np0005590810 systemd[1]: var-lib-containers-storage-overlay-04ca852298e594a31bda4d20a8ad2aee575e5775d01dc981f303270b0d21d91f-merged.mount: Deactivated successfully.
Jan 21 11:12:45 np0005590810 podman[125382]: 2026-01-21 16:12:45.156866843 +0000 UTC m=+0.984693207 container remove a64c931860f1cbfd1617a433e8d143db23760e7f5c985595ef335e7cf886b8ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_fermi, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:12:45 np0005590810 systemd[1]: libpod-conmon-a64c931860f1cbfd1617a433e8d143db23760e7f5c985595ef335e7cf886b8ee.scope: Deactivated successfully.
Jan 21 11:12:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:12:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:12:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:12:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:12:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Jan 21 11:12:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:12:45] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Jan 21 11:12:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:12:45] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Jan 21 11:12:46 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:12:46 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:12:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:46.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:46.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:12:46.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:12:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:12:46.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:12:47 np0005590810 python3.9[125727]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 21 11:12:47 np0005590810 python3.9[125806]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:48 np0005590810 python3.9[125958]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:48.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:12:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:48.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:12:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:48 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:48 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:48 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:48 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:48 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:48 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:48 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:48 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:48 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:48 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:12:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:12:49 np0005590810 python3.9[126111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 21 11:12:49 np0005590810 python3.9[126202]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5508000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:49 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f8001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:50 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54e0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:50.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:50.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:50 np0005590810 python3.9[126358]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 21 11:12:50 np0005590810 systemd[1]: Starting Time & Date Service...
Jan 21 11:12:51 np0005590810 systemd[1]: Started Time & Date Service.
Jan 21 11:12:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 21 11:12:51 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:12:51 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2573 writes, 12K keys, 2572 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 2573 writes, 2572 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2573 writes, 12K keys, 2572 commit groups, 1.0 writes per commit group, ingest: 19.86 MB, 0.03 MB/s#012Interval WAL: 2573 writes, 2572 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     73.4      0.22              0.05         5    0.044       0      0       0.0       0.0#012  L6      1/0   11.41 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    122.0    109.5      0.39              0.11         4    0.098     17K   1836       0.0       0.0#012 Sum      1/0   11.41 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6     77.9     96.5      0.61              0.15         9    0.068     17K   1836       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6     78.5     97.1      0.61              0.15         8    0.076     17K   1836       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    122.0    109.5      0.39              0.11         4    0.098     17K   1836       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     74.8      0.22              0.05         4    0.054       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.0      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.016#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.10 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.6 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e6f7731350#2 capacity: 304.00 MB usage: 1.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(92,1.02 MB,0.334895%) FilterBlock(10,57.48 KB,0.0184661%) IndexBlock(10,114.98 KB,0.0369373%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 21 11:12:51 np0005590810 python3.9[126516]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161251 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:12:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:51 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55000013a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:51 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:52 : epoch 6970faec : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:12:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:52 : epoch 6970faec : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:12:52 np0005590810 python3.9[126668]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:52 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f8002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:12:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:52.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:12:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:52.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:53 np0005590810 python3.9[126747]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 21 11:12:53 np0005590810 python3.9[126900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:53 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54e00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:53 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5500002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:12:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:12:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:12:54 np0005590810 python3.9[126978]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.uobs4u_3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:54 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:54.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:12:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:54.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:12:54 np0005590810 python3.9[127131]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:55 : epoch 6970faec : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:12:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 21 11:12:55 np0005590810 python3.9[127210]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:12:55] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 21 11:12:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:12:55] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 21 11:12:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:55 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f8002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:56 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54e00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:56 np0005590810 python3.9[127387]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:12:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:56 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5500002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:56.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:12:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:56.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:12:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:12:56.977Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:12:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:12:56.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:12:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:12:57 np0005590810 python3[127541]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 11:12:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161257 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:12:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:57 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:58 np0005590810 python3.9[127694]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:58 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f8002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:58 np0005590810 python3.9[127772]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:58 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54e00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:12:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:12:58.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:12:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:12:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:12:58.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:12:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:12:59 np0005590810 python3.9[127925]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:12:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:12:59 np0005590810 python3.9[128051]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769011978.759677-894-146550957373117/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:12:59 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:12:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:12:59 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5500002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:00 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:00 np0005590810 python3.9[128204]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:13:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:00 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f8002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:00.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:00.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:01 np0005590810 python3.9[128283]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:13:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:13:01 np0005590810 python3.9[128436]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:13:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:01 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54e0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:02 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5500003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:02 np0005590810 python3.9[128514]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:13:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:02 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:02.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:02.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:03 np0005590810 python3.9[128667]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:13:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 21 11:13:03 np0005590810 python3.9[128746]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:13:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:03 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f8002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:04 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54e0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:13:04 np0005590810 python3.9[128898]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:13:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:04 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5500003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:04.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:04.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Jan 21 11:13:05 np0005590810 python3.9[129055]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:13:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:13:05] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 21 11:13:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:13:05] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 21 11:13:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:05 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:06 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f8002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:06 np0005590810 python3.9[129207]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:13:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:06 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f8002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:06.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:06.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:06 np0005590810 python3.9[129360]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:13:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:06.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:13:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:06.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:13:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:06.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:13:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:13:07 np0005590810 python3.9[129513]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 11:13:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:08 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5500003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:08 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:08 np0005590810 python3.9[129665]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 11:13:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:08 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54e0003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:08.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:08.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:13:09 np0005590810 systemd[1]: session-44.scope: Deactivated successfully.
Jan 21 11:13:09 np0005590810 systemd[1]: session-44.scope: Consumed 29.771s CPU time.
Jan 21 11:13:09 np0005590810 systemd-logind[795]: Session 44 logged out. Waiting for processes to exit.
Jan 21 11:13:09 np0005590810 systemd-logind[795]: Removed session 44.
Jan 21 11:13:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:13:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:13:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:13:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:13:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:13:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:13:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:13:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:13:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:13:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:10 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f8002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:10 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5500004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:10 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:10.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:10.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:13:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:12 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54e0003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:12 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54e0003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:12.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:13:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:12.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:13:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:13:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:14 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:13:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:14 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54d8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:14 np0005590810 systemd-logind[795]: New session 45 of user zuul.
Jan 21 11:13:14 np0005590810 systemd[1]: Started Session 45 of User zuul.
Jan 21 11:13:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:14 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f8002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:14.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:14.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:13:15 np0005590810 python3.9[129853]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 21 11:13:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:13:15] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 21 11:13:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:13:15] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 21 11:13:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:16 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54e0003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:16 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:16 np0005590810 python3.9[130030]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:13:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:16 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54d80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:16.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:16.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:16.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:13:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:16.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:13:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:16.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:13:17 np0005590810 python3.9[130185]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 21 11:13:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:13:17 np0005590810 python3.9[130338]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.dcklx2wr follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:13:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:18 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f8002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:18 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54e0003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:18 np0005590810 python3.9[130463]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.dcklx2wr mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769011997.3709726-102-149912635071966/.source.dcklx2wr _original_basename=.kf2e3sbt follow=False checksum=420ecd086663a55a837fc958078127463de53c3d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:13:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[122250]: 21/01/2026 16:13:18 : epoch 6970faec : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec004050 fd 39 proxy ignored for local
Jan 21 11:13:18 np0005590810 kernel: ganesha.nfsd[126227]: segfault at 50 ip 00007f55909a932e sp 00007f54f5ffa210 error 4 in libntirpc.so.5.8[7f559098e000+2c000] likely on CPU 6 (core 0, socket 6)
Jan 21 11:13:18 np0005590810 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 21 11:13:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:18.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:18 np0005590810 systemd[1]: Started Process Core Dump (PID 130489/UID 0).
Jan 21 11:13:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:18.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:13:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:13:20 np0005590810 python3.9[130619]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:13:20 np0005590810 systemd-coredump[130490]: Process 122254 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 52:#012#0  0x00007f55909a932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Jan 21 11:13:20 np0005590810 systemd[1]: systemd-coredump@4-130489-0.service: Deactivated successfully.
Jan 21 11:13:20 np0005590810 systemd[1]: systemd-coredump@4-130489-0.service: Consumed 1.268s CPU time.
Jan 21 11:13:20 np0005590810 podman[130648]: 2026-01-21 16:13:20.541662015 +0000 UTC m=+0.031975854 container died 9857929bc801b6e06865a8c2f41dbee78deda79cfde17ae150dd979a7a98e8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 11:13:20 np0005590810 systemd[1]: var-lib-containers-storage-overlay-5b36596e16f84ac66ca5107ebb8e98be239e5103f23dd7ab8813c2b76c6fea40-merged.mount: Deactivated successfully.
Jan 21 11:13:20 np0005590810 podman[130648]: 2026-01-21 16:13:20.76594022 +0000 UTC m=+0.256254059 container remove 9857929bc801b6e06865a8c2f41dbee78deda79cfde17ae150dd979a7a98e8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 21 11:13:20 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Main process exited, code=exited, status=139/n/a
Jan 21 11:13:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:20.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:20.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:20 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Failed with result 'exit-code'.
Jan 21 11:13:20 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.491s CPU time.
Jan 21 11:13:21 np0005590810 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 21 11:13:21 np0005590810 python3.9[130818]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv0OBA2N1TAhxCdzXzrKN5DlNfOrnT3Wi6nfzeJkABVqUJeFd/SfTq8jSsLQ0pSkaOtVz+7W4LH88S0z3Nr1QfpfW4gHrJ1pT3O8Biq3Mgx7hUrKnL2cT1yKiD5Iq6T8UfNKNevEDbj0NQ+Jic0LJcUkOXatyclTAfvo8YENhy8hYnpUwaok5oAr7uw5HG4RZIj8PBGPWkSEdi4tKcGFXULERSm/K1rqhn5MOIzE3Dmvbnz3tBIzr8tAYdgXau4u4WTSBksysxWmVSk2eyhM/lvvd5TcaDGxH83eA2teAos9JkHzlxc2CXEBlGAuUlCbkJ69epl2vk9TKE87AhQhX7HGGImZ5toC6v4HVxWg95OMnE58pagea+0piEMIIxqZqMeWO6MNTXSbMTnhLPWiVUaA55u3OXGCg01yx9SLoy/bf/qXSsNv+3CZzjM2pn1JDpa2ZWcdZZ1WEmzj7z7uIIOR2M29jmSLqDYojaCwQrQ2X4H0RZ/PUgBDanAtgsAVs=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKmF8H6cVWPWJTBmu5sIvEQ1SEBiVtyh3cbexmKkjI7Z#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJOj6LbdCoXBm80G93arkMtQif0yRoMDDmGu5j1rGV2FPgXCY5k6WoAAG4AGJ49Uf/s3xvYGbnl4/h56B9Fe044=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3Iqo/i3IcHIykkN8D4X+kGokBMPdD6So6vktK/kuArfPGY9DRKQDPQTng9i8cEzg3k9G/Fw8NYCfkLPwWq3mT2vsX5CI5kacYxnTZd2e3uEbwEqIofkP+X4jxc3idj4xz6NIROh0h4ZELPLZoNr/Gws7+ZWVTlBRYYoQegDQvNVvIgQoFQg7TEFLBQ3+foQenlf/CoWRvdznwVr8Yd4lVM5MA+47Yv0lr0HoFVydahQUDb81O3hGXuTxmaYYUuwURQf6gJgalzxytF9nPuT8yx4aVsE7EHYLyMcXMioRAIyo2Ucl7tItO2I8R+NdTwwdfqBykheE/tcj3RH9CkvrNmUW4M6ttnSPBSvymxteLfANWFBDmNUp1POj/BLvHCfI3HK+tXVQQqxbTdf7jA4Y4+1Z8mXxGMsnBm+hvLZX383qQKXk86tH2o4a68WPC01j/5yXrNoutppw/5coIiBasAgYj+UDAK5Vcroyb1adwZ9NPaZ7kuhdomj1ExvosmM0=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII2WIswHvg9V6rPDqJn3Fes0nz60HX3SPtnVmRIM+62w#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJtQAX9qYDZG4bYi2g9Sd+kC8/wUgucn/wABzN43Z14vseyme19Ye6/KW5wcv9xwMfGcTmL0sRtXjENcBHkixw4=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwcS0MSlu+GKAz+lDpBna25Kps4X5YW4KrOmLpWp+emFCG8fzlXBV+TxxMmBmtUiTsJO1/NaTLWNuadxcYslky2cThrxY1qAQADYCp9yLRn2OhM5+22XBsp9bNROL17hs+l5RddUQL2b1t9m0a/oRUocMv4Wy4ukc+dooKfqPSJK6VDl3MiUf8VqaJnoY5uAV84Qv5+Ku5emapmZ9va5WF+rLFumdEVTcdhhLwHxcl88xD1hNBWlfo7Bth/6ouVMa3EHFOJF8MM01l+MdGT9lGFulJnsq9xWIC0TrpuquuZGDhtL7FLcUa/UUhRjl3FIKhpIp6jHE1/qzBaIRPFR4va55U5rvOPkml/Oy9GFoHKL+o6KaAGzsQoLx4974jP8qMrCWhi6eSq6XY/cIxiNtvrdnxKrlDkT+Nh6RxYrATeUj8PpbABYgKHhPxJEh7BfNxLqqCNXW0MXw9rRxDnRqv2dhC5xPF08V5B5mmC7+gLeSqCaZrI16j8cj35LLe/5c=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN2892RP3rwefuRtkEcf8F9bZmp8LNkkHHtcAEke5aUU#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBItJYQ6JQLmwVGkkei84vuzYFf7il2vni7w9cIAKRYoy2WzAfVMVgO3nCoqO8E/cBJeFrGYRv6JSsIas6GFr9Pc=#012 create=True mode=0644 path=/tmp/ansible.dcklx2wr state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:13:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:13:22 np0005590810 python3.9[130973]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.dcklx2wr' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:13:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:22.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:22.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:22 np0005590810 python3.9[131127]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.dcklx2wr state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:13:23 np0005590810 systemd[1]: session-45.scope: Deactivated successfully.
Jan 21 11:13:23 np0005590810 systemd[1]: session-45.scope: Consumed 5.141s CPU time.
Jan 21 11:13:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:13:23 np0005590810 systemd-logind[795]: Session 45 logged out. Waiting for processes to exit.
Jan 21 11:13:23 np0005590810 systemd-logind[795]: Removed session 45.
Jan 21 11:13:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:13:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:13:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:13:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:24.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:24.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:13:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:13:25] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:13:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:13:25] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:13:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161326 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:13:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:26.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:26.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:26.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:13:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:26.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:13:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:13:28 np0005590810 systemd-logind[795]: New session 46 of user zuul.
Jan 21 11:13:28 np0005590810 systemd[1]: Started Session 46 of User zuul.
Jan 21 11:13:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:28.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:28.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:13:29 np0005590810 python3.9[131312]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:13:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:13:30 np0005590810 python3.9[131469]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 21 11:13:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:30.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:30.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:31 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Scheduled restart job, restart counter is at 5.
Jan 21 11:13:31 np0005590810 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:13:31 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.491s CPU time.
Jan 21 11:13:31 np0005590810 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:13:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:13:31 np0005590810 podman[131669]: 2026-01-21 16:13:31.346708076 +0000 UTC m=+0.043050851 container create 6ba055e595104d6ccff23241b85ef231ffc5015125597903e8f063bdcddbbf3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:13:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c75b5bc49f70a41336eaf0f83d4e73d91e057a119d9b23729115019aee730b5/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c75b5bc49f70a41336eaf0f83d4e73d91e057a119d9b23729115019aee730b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c75b5bc49f70a41336eaf0f83d4e73d91e057a119d9b23729115019aee730b5/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c75b5bc49f70a41336eaf0f83d4e73d91e057a119d9b23729115019aee730b5/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbatwb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:31 np0005590810 podman[131669]: 2026-01-21 16:13:31.327112091 +0000 UTC m=+0.023454886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:13:31 np0005590810 python3.9[131656]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:13:31 np0005590810 podman[131669]: 2026-01-21 16:13:31.568357008 +0000 UTC m=+0.264699803 container init 6ba055e595104d6ccff23241b85ef231ffc5015125597903e8f063bdcddbbf3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 21 11:13:31 np0005590810 podman[131669]: 2026-01-21 16:13:31.57385329 +0000 UTC m=+0.270196065 container start 6ba055e595104d6ccff23241b85ef231ffc5015125597903e8f063bdcddbbf3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:13:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:31 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 21 11:13:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:31 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 21 11:13:31 np0005590810 bash[131669]: 6ba055e595104d6ccff23241b85ef231ffc5015125597903e8f063bdcddbbf3f
Jan 21 11:13:31 np0005590810 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:13:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:31 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 21 11:13:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:31 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 21 11:13:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:31 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 21 11:13:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:31 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 21 11:13:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:31 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 21 11:13:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:31 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:13:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:32.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:32.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:13:33 np0005590810 python3.9[131880]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:13:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:13:34 np0005590810 python3.9[132033]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:13:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:34.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:13:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:34.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:13:35 np0005590810 python3.9[132187]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:13:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Jan 21 11:13:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:13:35] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:13:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:13:35] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:13:35 np0005590810 systemd[1]: session-46.scope: Deactivated successfully.
Jan 21 11:13:35 np0005590810 systemd[1]: session-46.scope: Consumed 3.966s CPU time.
Jan 21 11:13:35 np0005590810 systemd-logind[795]: Session 46 logged out. Waiting for processes to exit.
Jan 21 11:13:35 np0005590810 systemd-logind[795]: Removed session 46.
Jan 21 11:13:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0[79851]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 21 11:13:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:36.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:36.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:36.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:13:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:13:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:37 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:13:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:37 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:13:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:38.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:38.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:13:39
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'backups', 'default.rgw.log', 'images', 'default.rgw.meta', '.nfs', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr']
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:13:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:13:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:13:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:13:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:40.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:13:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:40.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:13:41 np0005590810 systemd-logind[795]: New session 47 of user zuul.
Jan 21 11:13:41 np0005590810 systemd[1]: Started Session 47 of User zuul.
Jan 21 11:13:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:13:42 np0005590810 python3.9[132396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:13:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:42.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:42.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:13:43 np0005590810 python3.9[132554]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 21 11:13:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:43 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:13:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:44 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d88000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:44 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d800014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:13:44 np0005590810 python3.9[132650]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 11:13:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:44 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d68000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:13:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:44.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:13:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:44.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:13:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:13:45] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 21 11:13:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:13:45] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 21 11:13:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:13:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161346 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:13:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:46 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d6c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:13:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:46 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d70000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:13:46 np0005590810 python3.9[132940]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:13:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:46 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d800021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:46.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:13:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:13:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:46.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:46.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:13:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:46.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:13:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:46.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:13:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:13:47 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:13:47 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:13:47 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:13:47 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:13:47 np0005590810 podman[133075]: 2026-01-21 16:13:47.464010624 +0000 UTC m=+0.047098015 container create d0ec4f9bc657d49110da67e6ccdd152d2da72f18332d214e98cc491d342e0751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:13:47 np0005590810 systemd[1]: Started libpod-conmon-d0ec4f9bc657d49110da67e6ccdd152d2da72f18332d214e98cc491d342e0751.scope.
Jan 21 11:13:47 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:13:47 np0005590810 podman[133075]: 2026-01-21 16:13:47.443310504 +0000 UTC m=+0.026397925 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:13:47 np0005590810 podman[133075]: 2026-01-21 16:13:47.556507536 +0000 UTC m=+0.139594947 container init d0ec4f9bc657d49110da67e6ccdd152d2da72f18332d214e98cc491d342e0751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_cerf, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:13:47 np0005590810 podman[133075]: 2026-01-21 16:13:47.564411489 +0000 UTC m=+0.147498880 container start d0ec4f9bc657d49110da67e6ccdd152d2da72f18332d214e98cc491d342e0751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_cerf, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:13:47 np0005590810 podman[133075]: 2026-01-21 16:13:47.56821013 +0000 UTC m=+0.151297521 container attach d0ec4f9bc657d49110da67e6ccdd152d2da72f18332d214e98cc491d342e0751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:13:47 np0005590810 systemd[1]: libpod-d0ec4f9bc657d49110da67e6ccdd152d2da72f18332d214e98cc491d342e0751.scope: Deactivated successfully.
Jan 21 11:13:47 np0005590810 sad_cerf[133114]: 167 167
Jan 21 11:13:47 np0005590810 podman[133075]: 2026-01-21 16:13:47.574829581 +0000 UTC m=+0.157916972 container died d0ec4f9bc657d49110da67e6ccdd152d2da72f18332d214e98cc491d342e0751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:13:47 np0005590810 conmon[133114]: conmon d0ec4f9bc657d49110da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0ec4f9bc657d49110da67e6ccdd152d2da72f18332d214e98cc491d342e0751.scope/container/memory.events
Jan 21 11:13:47 np0005590810 systemd[1]: var-lib-containers-storage-overlay-5a2b4992f54ee9035a957e59cb130f0c7261e8d7f9b748deeb0cc86a45a5cf32-merged.mount: Deactivated successfully.
Jan 21 11:13:47 np0005590810 podman[133075]: 2026-01-21 16:13:47.622367009 +0000 UTC m=+0.205454400 container remove d0ec4f9bc657d49110da67e6ccdd152d2da72f18332d214e98cc491d342e0751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_cerf, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 21 11:13:47 np0005590810 systemd[1]: libpod-conmon-d0ec4f9bc657d49110da67e6ccdd152d2da72f18332d214e98cc491d342e0751.scope: Deactivated successfully.
Jan 21 11:13:47 np0005590810 podman[133166]: 2026-01-21 16:13:47.792050695 +0000 UTC m=+0.047402104 container create 3ed0b65186e54980ed03b7c2e246ed28e1c1e37bc655d089a6ed3bde83ff841f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_williams, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:13:47 np0005590810 systemd[1]: Started libpod-conmon-3ed0b65186e54980ed03b7c2e246ed28e1c1e37bc655d089a6ed3bde83ff841f.scope.
Jan 21 11:13:47 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:13:47 np0005590810 podman[133166]: 2026-01-21 16:13:47.771448978 +0000 UTC m=+0.026800417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:13:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bdc1090c93b4378b7b6ffd65d21bde5373fd41848418264627a4a693b76720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bdc1090c93b4378b7b6ffd65d21bde5373fd41848418264627a4a693b76720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bdc1090c93b4378b7b6ffd65d21bde5373fd41848418264627a4a693b76720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bdc1090c93b4378b7b6ffd65d21bde5373fd41848418264627a4a693b76720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bdc1090c93b4378b7b6ffd65d21bde5373fd41848418264627a4a693b76720/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:47 np0005590810 podman[133166]: 2026-01-21 16:13:47.976878144 +0000 UTC m=+0.232229583 container init 3ed0b65186e54980ed03b7c2e246ed28e1c1e37bc655d089a6ed3bde83ff841f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 21 11:13:47 np0005590810 podman[133166]: 2026-01-21 16:13:47.986115219 +0000 UTC m=+0.241466628 container start 3ed0b65186e54980ed03b7c2e246ed28e1c1e37bc655d089a6ed3bde83ff841f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_williams, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:13:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:48 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d800021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:48 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:48 np0005590810 python3.9[133258]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 11:13:48 np0005590810 podman[133166]: 2026-01-21 16:13:48.247486883 +0000 UTC m=+0.502838322 container attach 3ed0b65186e54980ed03b7c2e246ed28e1c1e37bc655d089a6ed3bde83ff841f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_williams, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:13:48 np0005590810 epic_williams[133205]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:13:48 np0005590810 epic_williams[133205]: --> All data devices are unavailable
Jan 21 11:13:48 np0005590810 systemd[1]: libpod-3ed0b65186e54980ed03b7c2e246ed28e1c1e37bc655d089a6ed3bde83ff841f.scope: Deactivated successfully.
Jan 21 11:13:48 np0005590810 podman[133166]: 2026-01-21 16:13:48.381525401 +0000 UTC m=+0.636876820 container died 3ed0b65186e54980ed03b7c2e246ed28e1c1e37bc655d089a6ed3bde83ff841f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_williams, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 11:13:48 np0005590810 systemd[1]: var-lib-containers-storage-overlay-93bdc1090c93b4378b7b6ffd65d21bde5373fd41848418264627a4a693b76720-merged.mount: Deactivated successfully.
Jan 21 11:13:48 np0005590810 podman[133166]: 2026-01-21 16:13:48.646189769 +0000 UTC m=+0.901541178 container remove 3ed0b65186e54980ed03b7c2e246ed28e1c1e37bc655d089a6ed3bde83ff841f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:13:48 np0005590810 systemd[1]: libpod-conmon-3ed0b65186e54980ed03b7c2e246ed28e1c1e37bc655d089a6ed3bde83ff841f.scope: Deactivated successfully.
Jan 21 11:13:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:48 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d70001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:48.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:48.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:48 np0005590810 python3.9[133481]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:13:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:13:49 np0005590810 podman[133548]: 2026-01-21 16:13:49.199839391 +0000 UTC m=+0.023687827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:13:49 np0005590810 podman[133548]: 2026-01-21 16:13:49.296736945 +0000 UTC m=+0.120585361 container create 2ef9e1384d5ad5e4d91f66f36d7f0df645fb14b8782121be122dce76dd5a7162 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_keller, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Jan 21 11:13:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:13:49 np0005590810 systemd[1]: Started libpod-conmon-2ef9e1384d5ad5e4d91f66f36d7f0df645fb14b8782121be122dce76dd5a7162.scope.
Jan 21 11:13:49 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:13:49 np0005590810 podman[133548]: 2026-01-21 16:13:49.444521972 +0000 UTC m=+0.268370408 container init 2ef9e1384d5ad5e4d91f66f36d7f0df645fb14b8782121be122dce76dd5a7162 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_keller, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 21 11:13:49 np0005590810 podman[133548]: 2026-01-21 16:13:49.451075021 +0000 UTC m=+0.274923437 container start 2ef9e1384d5ad5e4d91f66f36d7f0df645fb14b8782121be122dce76dd5a7162 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_keller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 21 11:13:49 np0005590810 kind_keller[133629]: 167 167
Jan 21 11:13:49 np0005590810 systemd[1]: libpod-2ef9e1384d5ad5e4d91f66f36d7f0df645fb14b8782121be122dce76dd5a7162.scope: Deactivated successfully.
Jan 21 11:13:49 np0005590810 conmon[133629]: conmon 2ef9e1384d5ad5e4d91f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ef9e1384d5ad5e4d91f66f36d7f0df645fb14b8782121be122dce76dd5a7162.scope/container/memory.events
Jan 21 11:13:49 np0005590810 podman[133548]: 2026-01-21 16:13:49.477130592 +0000 UTC m=+0.300979008 container attach 2ef9e1384d5ad5e4d91f66f36d7f0df645fb14b8782121be122dce76dd5a7162 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_keller, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 21 11:13:49 np0005590810 podman[133548]: 2026-01-21 16:13:49.478010601 +0000 UTC m=+0.301859017 container died 2ef9e1384d5ad5e4d91f66f36d7f0df645fb14b8782121be122dce76dd5a7162 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 11:13:49 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a8bb0d78d9850d8e0d54f62c3e8cb3244c746194da6b22c990c4ae0a36ecbffd-merged.mount: Deactivated successfully.
Jan 21 11:13:49 np0005590810 podman[133548]: 2026-01-21 16:13:49.585645997 +0000 UTC m=+0.409494413 container remove 2ef9e1384d5ad5e4d91f66f36d7f0df645fb14b8782121be122dce76dd5a7162 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_keller, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:13:49 np0005590810 systemd[1]: libpod-conmon-2ef9e1384d5ad5e4d91f66f36d7f0df645fb14b8782121be122dce76dd5a7162.scope: Deactivated successfully.
Jan 21 11:13:49 np0005590810 python3.9[133705]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:13:49 np0005590810 podman[133714]: 2026-01-21 16:13:49.763347558 +0000 UTC m=+0.069888171 container create 1398021b73250267c9f700bd1ff398da387770deaaee90471a4bba3e1bcd5fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 11:13:49 np0005590810 systemd[1]: Started libpod-conmon-1398021b73250267c9f700bd1ff398da387770deaaee90471a4bba3e1bcd5fe1.scope.
Jan 21 11:13:49 np0005590810 podman[133714]: 2026-01-21 16:13:49.717187695 +0000 UTC m=+0.023728308 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:13:49 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:13:49 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1b65783719872e1e17015d286e24025287948bbacea22fd6e93c7728e9ee27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:49 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1b65783719872e1e17015d286e24025287948bbacea22fd6e93c7728e9ee27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:49 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1b65783719872e1e17015d286e24025287948bbacea22fd6e93c7728e9ee27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:49 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1b65783719872e1e17015d286e24025287948bbacea22fd6e93c7728e9ee27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:49 np0005590810 podman[133714]: 2026-01-21 16:13:49.851642947 +0000 UTC m=+0.158183580 container init 1398021b73250267c9f700bd1ff398da387770deaaee90471a4bba3e1bcd5fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:13:49 np0005590810 podman[133714]: 2026-01-21 16:13:49.859585251 +0000 UTC m=+0.166125864 container start 1398021b73250267c9f700bd1ff398da387770deaaee90471a4bba3e1bcd5fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 11:13:49 np0005590810 podman[133714]: 2026-01-21 16:13:49.896824139 +0000 UTC m=+0.203364752 container attach 1398021b73250267c9f700bd1ff398da387770deaaee90471a4bba3e1bcd5fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:13:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:50 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d6c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:50 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d800021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:50 np0005590810 serene_lalande[133754]: {
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:    "0": [
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:        {
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            "devices": [
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "/dev/loop3"
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            ],
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            "lv_name": "ceph_lv0",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            "lv_size": "21470642176",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            "name": "ceph_lv0",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            "tags": {
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.cluster_name": "ceph",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.crush_device_class": "",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.encrypted": "0",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.osd_id": "0",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.type": "block",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.vdo": "0",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:                "ceph.with_tpm": "0"
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            },
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            "type": "block",
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:            "vg_name": "ceph_vg0"
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:        }
Jan 21 11:13:50 np0005590810 serene_lalande[133754]:    ]
Jan 21 11:13:50 np0005590810 serene_lalande[133754]: }
Jan 21 11:13:50 np0005590810 systemd[1]: libpod-1398021b73250267c9f700bd1ff398da387770deaaee90471a4bba3e1bcd5fe1.scope: Deactivated successfully.
Jan 21 11:13:50 np0005590810 podman[133714]: 2026-01-21 16:13:50.181057002 +0000 UTC m=+0.487597615 container died 1398021b73250267c9f700bd1ff398da387770deaaee90471a4bba3e1bcd5fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 21 11:13:50 np0005590810 systemd[1]: session-47.scope: Deactivated successfully.
Jan 21 11:13:50 np0005590810 systemd[1]: session-47.scope: Consumed 5.974s CPU time.
Jan 21 11:13:50 np0005590810 systemd-logind[795]: Session 47 logged out. Waiting for processes to exit.
Jan 21 11:13:50 np0005590810 systemd-logind[795]: Removed session 47.
Jan 21 11:13:50 np0005590810 systemd[1]: var-lib-containers-storage-overlay-fd1b65783719872e1e17015d286e24025287948bbacea22fd6e93c7728e9ee27-merged.mount: Deactivated successfully.
Jan 21 11:13:50 np0005590810 podman[133714]: 2026-01-21 16:13:50.431171006 +0000 UTC m=+0.737711609 container remove 1398021b73250267c9f700bd1ff398da387770deaaee90471a4bba3e1bcd5fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:13:50 np0005590810 systemd[1]: libpod-conmon-1398021b73250267c9f700bd1ff398da387770deaaee90471a4bba3e1bcd5fe1.scope: Deactivated successfully.
Jan 21 11:13:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:50 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d800021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:13:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:50.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:13:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:50.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:51 np0005590810 podman[133864]: 2026-01-21 16:13:51.036485756 +0000 UTC m=+0.039187592 container create a28bbed456c58ab9efc14792f4af9bd66f0dbd405d194afcd4d2a769a9739bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_chatterjee, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:13:51 np0005590810 systemd[1]: Started libpod-conmon-a28bbed456c58ab9efc14792f4af9bd66f0dbd405d194afcd4d2a769a9739bac.scope.
Jan 21 11:13:51 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:13:51 np0005590810 podman[133864]: 2026-01-21 16:13:51.106007506 +0000 UTC m=+0.108709362 container init a28bbed456c58ab9efc14792f4af9bd66f0dbd405d194afcd4d2a769a9739bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 21 11:13:51 np0005590810 podman[133864]: 2026-01-21 16:13:51.11304534 +0000 UTC m=+0.115747176 container start a28bbed456c58ab9efc14792f4af9bd66f0dbd405d194afcd4d2a769a9739bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 11:13:51 np0005590810 podman[133864]: 2026-01-21 16:13:51.019232436 +0000 UTC m=+0.021934292 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:13:51 np0005590810 podman[133864]: 2026-01-21 16:13:51.11616036 +0000 UTC m=+0.118862256 container attach a28bbed456c58ab9efc14792f4af9bd66f0dbd405d194afcd4d2a769a9739bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:13:51 np0005590810 unruffled_chatterjee[133880]: 167 167
Jan 21 11:13:51 np0005590810 systemd[1]: libpod-a28bbed456c58ab9efc14792f4af9bd66f0dbd405d194afcd4d2a769a9739bac.scope: Deactivated successfully.
Jan 21 11:13:51 np0005590810 podman[133864]: 2026-01-21 16:13:51.117453761 +0000 UTC m=+0.120155597 container died a28bbed456c58ab9efc14792f4af9bd66f0dbd405d194afcd4d2a769a9739bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:13:51 np0005590810 systemd[1]: var-lib-containers-storage-overlay-af925ca1d9f70a5d17395346961cf1921c37f2904783ff497460b56f92419066-merged.mount: Deactivated successfully.
Jan 21 11:13:51 np0005590810 podman[133864]: 2026-01-21 16:13:51.157674855 +0000 UTC m=+0.160376691 container remove a28bbed456c58ab9efc14792f4af9bd66f0dbd405d194afcd4d2a769a9739bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_chatterjee, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 21 11:13:51 np0005590810 systemd[1]: libpod-conmon-a28bbed456c58ab9efc14792f4af9bd66f0dbd405d194afcd4d2a769a9739bac.scope: Deactivated successfully.
Jan 21 11:13:51 np0005590810 podman[133907]: 2026-01-21 16:13:51.312773056 +0000 UTC m=+0.046063622 container create 430208423a3da503d9275a7137a107f0619f39ec29acb4ac5b734c2f2989a967 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 21 11:13:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:13:51 np0005590810 systemd[1]: Started libpod-conmon-430208423a3da503d9275a7137a107f0619f39ec29acb4ac5b734c2f2989a967.scope.
Jan 21 11:13:51 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:13:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a59777c0f6d86f92ede99c8f29571b6c1d9cf94d36d3c23c5712b19859168bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a59777c0f6d86f92ede99c8f29571b6c1d9cf94d36d3c23c5712b19859168bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a59777c0f6d86f92ede99c8f29571b6c1d9cf94d36d3c23c5712b19859168bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:51 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a59777c0f6d86f92ede99c8f29571b6c1d9cf94d36d3c23c5712b19859168bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:13:51 np0005590810 podman[133907]: 2026-01-21 16:13:51.290460714 +0000 UTC m=+0.023751300 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:13:51 np0005590810 podman[133907]: 2026-01-21 16:13:51.389390271 +0000 UTC m=+0.122680857 container init 430208423a3da503d9275a7137a107f0619f39ec29acb4ac5b734c2f2989a967 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wing, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:13:51 np0005590810 podman[133907]: 2026-01-21 16:13:51.396086645 +0000 UTC m=+0.129377211 container start 430208423a3da503d9275a7137a107f0619f39ec29acb4ac5b734c2f2989a967 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 11:13:51 np0005590810 podman[133907]: 2026-01-21 16:13:51.399129362 +0000 UTC m=+0.132419958 container attach 430208423a3da503d9275a7137a107f0619f39ec29acb4ac5b734c2f2989a967 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wing, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:13:52 np0005590810 lvm[133997]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:13:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:52 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d70001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:52 np0005590810 lvm[133997]: VG ceph_vg0 finished
Jan 21 11:13:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:52 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d6c001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:52 np0005590810 focused_wing[133923]: {}
Jan 21 11:13:52 np0005590810 systemd[1]: libpod-430208423a3da503d9275a7137a107f0619f39ec29acb4ac5b734c2f2989a967.scope: Deactivated successfully.
Jan 21 11:13:52 np0005590810 systemd[1]: libpod-430208423a3da503d9275a7137a107f0619f39ec29acb4ac5b734c2f2989a967.scope: Consumed 1.205s CPU time.
Jan 21 11:13:52 np0005590810 podman[133907]: 2026-01-21 16:13:52.127954626 +0000 UTC m=+0.861245212 container died 430208423a3da503d9275a7137a107f0619f39ec29acb4ac5b734c2f2989a967 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wing, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 21 11:13:52 np0005590810 systemd[1]: var-lib-containers-storage-overlay-4a59777c0f6d86f92ede99c8f29571b6c1d9cf94d36d3c23c5712b19859168bf-merged.mount: Deactivated successfully.
Jan 21 11:13:52 np0005590810 podman[133907]: 2026-01-21 16:13:52.677661863 +0000 UTC m=+1.410952429 container remove 430208423a3da503d9275a7137a107f0619f39ec29acb4ac5b734c2f2989a967 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wing, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:13:52 np0005590810 systemd[1]: libpod-conmon-430208423a3da503d9275a7137a107f0619f39ec29acb4ac5b734c2f2989a967.scope: Deactivated successfully.
Jan 21 11:13:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:13:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:13:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:13:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:52 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:13:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:13:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:13:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:52.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:13:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:52.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:13:53 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:13:53 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:13:54 np0005590810 kernel: ganesha.nfsd[132566]: segfault at 50 ip 00007f3e1273832e sp 00007f3db57f9210 error 4 in libntirpc.so.5.8[7f3e1271d000+2c000] likely on CPU 3 (core 0, socket 3)
Jan 21 11:13:54 np0005590810 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 21 11:13:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[131684]: 21/01/2026 16:13:54 : epoch 6970fb2b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d800021d0 fd 38 proxy ignored for local
Jan 21 11:13:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:13:54 np0005590810 systemd[1]: Started Process Core Dump (PID 134041/UID 0).
Jan 21 11:13:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:13:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:13:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:54.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:54.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:13:55 np0005590810 systemd-logind[795]: New session 48 of user zuul.
Jan 21 11:13:55 np0005590810 systemd[1]: Started Session 48 of User zuul.
Jan 21 11:13:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:13:55] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 21 11:13:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:13:55] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 21 11:13:55 np0005590810 systemd-coredump[134042]: Process 131689 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007f3e1273832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Jan 21 11:13:55 np0005590810 systemd[1]: systemd-coredump@5-134041-0.service: Deactivated successfully.
Jan 21 11:13:55 np0005590810 systemd[1]: systemd-coredump@5-134041-0.service: Consumed 1.371s CPU time.
Jan 21 11:13:55 np0005590810 podman[134105]: 2026-01-21 16:13:55.777393096 +0000 UTC m=+0.030714671 container died 6ba055e595104d6ccff23241b85ef231ffc5015125597903e8f063bdcddbbf3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:13:55 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6c75b5bc49f70a41336eaf0f83d4e73d91e057a119d9b23729115019aee730b5-merged.mount: Deactivated successfully.
Jan 21 11:13:55 np0005590810 podman[134105]: 2026-01-21 16:13:55.94139468 +0000 UTC m=+0.194716246 container remove 6ba055e595104d6ccff23241b85ef231ffc5015125597903e8f063bdcddbbf3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 21 11:13:55 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Main process exited, code=exited, status=139/n/a
Jan 21 11:13:56 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Failed with result 'exit-code'.
Jan 21 11:13:56 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.446s CPU time.
Jan 21 11:13:56 np0005590810 python3.9[134268]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:13:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:56.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:13:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:56.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:13:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:56.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:13:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:13:56.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:13:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:13:58 np0005590810 python3.9[134426]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.460306) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012038460390, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 967, "num_deletes": 251, "total_data_size": 1646331, "memory_usage": 1670224, "flush_reason": "Manual Compaction"}
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012038470520, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1591228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12175, "largest_seqno": 13141, "table_properties": {"data_size": 1586576, "index_size": 2240, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10037, "raw_average_key_size": 19, "raw_value_size": 1577199, "raw_average_value_size": 2998, "num_data_blocks": 101, "num_entries": 526, "num_filter_entries": 526, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011959, "oldest_key_time": 1769011959, "file_creation_time": 1769012038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 10223 microseconds, and 4074 cpu microseconds.
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.470557) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1591228 bytes OK
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.470572) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.472727) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.472741) EVENT_LOG_v1 {"time_micros": 1769012038472737, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.472757) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1641831, prev total WAL file size 1641831, number of live WAL files 2.
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.473293) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1553KB)], [29(11MB)]
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012038473336, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 13556268, "oldest_snapshot_seqno": -1}
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4175 keys, 11600548 bytes, temperature: kUnknown
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012038562869, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 11600548, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11569960, "index_size": 19022, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 106878, "raw_average_key_size": 25, "raw_value_size": 11491061, "raw_average_value_size": 2752, "num_data_blocks": 802, "num_entries": 4175, "num_filter_entries": 4175, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769012038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.563098) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11600548 bytes
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.564319) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.3 rd, 129.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(15.8) write-amplify(7.3) OK, records in: 4691, records dropped: 516 output_compression: NoCompression
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.564336) EVENT_LOG_v1 {"time_micros": 1769012038564328, "job": 12, "event": "compaction_finished", "compaction_time_micros": 89605, "compaction_time_cpu_micros": 23946, "output_level": 6, "num_output_files": 1, "total_output_size": 11600548, "num_input_records": 4691, "num_output_records": 4175, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012038564815, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012038567524, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.473199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.567614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.567621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.567623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.567625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:13:58 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:13:58.567626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:13:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:13:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:13:58.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:13:58 np0005590810 python3.9[134578]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:13:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:13:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:13:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:13:58.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:13:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:13:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:13:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161359 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:13:59 np0005590810 python3.9[134732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161400 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:14:00 np0005590810 python3.9[134855]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012039.0588775-152-147561662436036/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f87d8def2761b2fd367c98229a12dbec644433c7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:00.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:00 np0005590810 python3.9[135008]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:14:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:00.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:14:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:14:01 np0005590810 python3.9[135132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012040.4793973-152-245670676732361/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2b3f72e2c70fb86ce27ab7b778355f2704f0a21d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:02 np0005590810 python3.9[135284]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:02 np0005590810 python3.9[135407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012041.6440716-152-240805624526098/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=92f3f5490b00620d194a07731299261ce1eeaa9b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:02.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:02.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:14:03 np0005590810 python3.9[135561]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:14:03 np0005590810 python3.9[135713]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:14:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:14:04 np0005590810 python3.9[135865]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:04.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:04.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:05 np0005590810 python3.9[135989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012044.0993884-332-220226718367010/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=1e9ba1e74889462fbe39f5e53b6f727a45ab821d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:14:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:14:05] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 21 11:14:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:14:05] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 21 11:14:05 np0005590810 python3.9[136142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:06 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Scheduled restart job, restart counter is at 6.
Jan 21 11:14:06 np0005590810 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:14:06 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.446s CPU time.
Jan 21 11:14:06 np0005590810 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:14:06 np0005590810 python3.9[136265]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012045.2033432-332-193205013007459/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=dace86199c83e0cd219262cd6a27425c4038cb65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:06 np0005590810 podman[136353]: 2026-01-21 16:14:06.300570524 +0000 UTC m=+0.039577734 container create 8b47c35bea0f357653679afd5a66bed95cac7d3dc8560753afa8b6935c6b89ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:14:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3122259f2a7707b5308e0750198f6798db60e09e083a84af7a48e6325d03cb/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3122259f2a7707b5308e0750198f6798db60e09e083a84af7a48e6325d03cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3122259f2a7707b5308e0750198f6798db60e09e083a84af7a48e6325d03cb/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3122259f2a7707b5308e0750198f6798db60e09e083a84af7a48e6325d03cb/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbatwb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:06 np0005590810 podman[136353]: 2026-01-21 16:14:06.360361513 +0000 UTC m=+0.099368743 container init 8b47c35bea0f357653679afd5a66bed95cac7d3dc8560753afa8b6935c6b89ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 11:14:06 np0005590810 podman[136353]: 2026-01-21 16:14:06.365564719 +0000 UTC m=+0.104571929 container start 8b47c35bea0f357653679afd5a66bed95cac7d3dc8560753afa8b6935c6b89ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:14:06 np0005590810 bash[136353]: 8b47c35bea0f357653679afd5a66bed95cac7d3dc8560753afa8b6935c6b89ab
Jan 21 11:14:06 np0005590810 podman[136353]: 2026-01-21 16:14:06.281832086 +0000 UTC m=+0.020839326 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:14:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 21 11:14:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 21 11:14:06 np0005590810 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:14:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 21 11:14:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 21 11:14:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 21 11:14:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 21 11:14:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 21 11:14:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:14:06 np0005590810 python3.9[136516]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:14:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:06.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:14:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:06.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:14:06.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:14:07 np0005590810 python3.9[136640]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012046.2613578-332-56573976905744/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d6758bc37f00a04d66ed4c887eab2492a92dfacc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:14:07 np0005590810 python3.9[136793]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:14:08 np0005590810 python3.9[136945]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:14:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:14:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:08.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:14:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:08.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:14:09 np0005590810 python3.9[137098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:14:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:14:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:14:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:14:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:14:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:14:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:14:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:14:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:14:09 np0005590810 python3.9[137222]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012048.705264-495-58763355670348/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=70cb0cfc7a85035cefd72526e305f320b484ad91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:10 np0005590810 python3.9[137374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:10 np0005590810 python3.9[137497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012049.856545-495-187647749655897/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=dace86199c83e0cd219262cd6a27425c4038cb65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:14:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:10.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:14:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:10.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Jan 21 11:14:11 np0005590810 python3.9[137651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:11 np0005590810 python3.9[137774]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012050.987901-495-85723865012366/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6074310ac247055cdc2eb801d3a08bb3dd9cd88a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:12 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:14:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:12 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:14:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:12.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:12.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:13 np0005590810 python3.9[137927]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:14:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Jan 21 11:14:13 np0005590810 python3.9[138080]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:14:14 np0005590810 python3.9[138203]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012053.3162613-682-49933340558097/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1cea5a8eed1224d858018fe9be73f8229d34ef3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:14:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:14.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:14:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:14.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:14:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:14:15] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:14:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:14:15] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 21 11:14:15 np0005590810 python3.9[138357]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:14:16 np0005590810 python3.9[138534]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:16 np0005590810 python3.9[138657]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012055.803455-777-4947572883178/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1cea5a8eed1224d858018fe9be73f8229d34ef3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:14:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:16.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:14:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:16.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:14:16.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:14:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:14:17 np0005590810 python3.9[138811]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:14:18 np0005590810 python3.9[138963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:14:18 np0005590810 python3.9[139099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012057.7083216-849-6406248951301/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1cea5a8eed1224d858018fe9be73f8229d34ef3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2a4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:18.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:18.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:14:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:14:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161419 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:14:19 np0005590810 python3.9[139254]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:14:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:20 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa294001240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:20 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:20 np0005590810 python3.9[139406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:20 np0005590810 python3.9[139529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012059.6706529-922-126861898300592/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1cea5a8eed1224d858018fe9be73f8229d34ef3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:20 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:20.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:20.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161420 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:14:21 np0005590810 python3.9[139683]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:14:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:14:21 np0005590810 python3.9[139835]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161422 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:14:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:22 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:22 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa294001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:22 np0005590810 python3.9[139958]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012061.477684-988-134933811106088/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1cea5a8eed1224d858018fe9be73f8229d34ef3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:22 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:22.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:22.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:23 np0005590810 python3.9[140111]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:14:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:14:23 np0005590810 python3.9[140264]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:24 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:14:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:24 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2980025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:14:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:14:24 np0005590810 python3.9[140387]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012063.3199377-1059-259779236389334/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1cea5a8eed1224d858018fe9be73f8229d34ef3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:24 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa294001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:14:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:24.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:14:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:24.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:14:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:14:25] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 21 11:14:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:14:25] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 21 11:14:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:26 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:26 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:26 np0005590810 systemd[1]: session-48.scope: Deactivated successfully.
Jan 21 11:14:26 np0005590810 systemd[1]: session-48.scope: Consumed 22.622s CPU time.
Jan 21 11:14:26 np0005590810 systemd-logind[795]: Session 48 logged out. Waiting for processes to exit.
Jan 21 11:14:26 np0005590810 systemd-logind[795]: Removed session 48.
Jan 21 11:14:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:26 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2980025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:26.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:26.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:14:26.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:14:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:14:26.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:14:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:14:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:28 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2980025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:28 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:28 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:28.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:28.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:14:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:14:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:30 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2980025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:30 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2980025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:30 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:14:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:30 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:30.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:30.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Jan 21 11:14:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:32 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:32 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2980025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:32 np0005590810 systemd-logind[795]: New session 49 of user zuul.
Jan 21 11:14:32 np0005590810 systemd[1]: Started Session 49 of User zuul.
Jan 21 11:14:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:32 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2980025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:14:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:32.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:14:32 np0005590810 python3.9[140575]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:14:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:32.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:14:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:14:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:33 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:14:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:33 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:14:33 np0005590810 python3.9[140729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:34 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2980025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:14:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:34 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:34 np0005590810 python3.9[140852]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012073.1553292-57-110409583768355/.source.conf _original_basename=ceph.conf follow=False checksum=7a21bcc031482b981a166f55f168620b322e7511 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:34 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa294001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:14:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:34.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:14:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:14:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:34.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:14:35 np0005590810 python3.9[141005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:14:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:14:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:14:35] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 21 11:14:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:14:35] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 21 11:14:35 np0005590810 python3.9[141129]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012074.6189494-57-133375245123755/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=2ea395d6108431abaf3eb9a42be6b8fa8c96438d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:36 np0005590810 systemd[1]: session-49.scope: Deactivated successfully.
Jan 21 11:14:36 np0005590810 systemd[1]: session-49.scope: Consumed 2.721s CPU time.
Jan 21 11:14:36 np0005590810 systemd-logind[795]: Session 49 logged out. Waiting for processes to exit.
Jan 21 11:14:36 np0005590810 systemd-logind[795]: Removed session 49.
Jan 21 11:14:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:36 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:36 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:36 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:14:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:36 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:36.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:14:36.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:14:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:37.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:14:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:38 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa294001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:38 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:38 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:14:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:38.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:14:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:39.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:14:39
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['.nfs', 'volumes', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'vms', 'backups', '.mgr', 'default.rgw.meta']
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:14:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:14:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:14:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:14:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:40 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:40 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa294001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:40 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:14:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:40.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:14:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:14:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:41.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:14:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:14:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:14:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 7419 writes, 30K keys, 7419 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 7419 writes, 1308 syncs, 5.67 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7419 writes, 30K keys, 7419 commit groups, 1.0 writes per commit group, ingest: 20.55 MB, 0.03 MB/s#012Interval WAL: 7419 writes, 1308 syncs, 5.67 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a71a4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a71a4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 21 11:14:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:42 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:42 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:42 np0005590810 systemd-logind[795]: New session 50 of user zuul.
Jan 21 11:14:42 np0005590810 systemd[1]: Started Session 50 of User zuul.
Jan 21 11:14:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:42 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa294001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:14:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:42.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:14:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161443 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:14:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:43.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:14:43 np0005590810 python3.9[141340]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:14:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:44 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:14:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:44 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:44 np0005590810 python3.9[141496]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:14:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:44 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:44.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:45.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:14:45 np0005590810 python3.9[141650]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:14:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:14:45] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 21 11:14:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:14:45] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 21 11:14:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:46 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa294001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:46 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:46 np0005590810 python3.9[141800]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:14:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:46 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:14:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:46.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:14:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:14:46.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:14:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:14:46.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:14:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:47.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:47 np0005590810 python3.9[141954]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 21 11:14:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:14:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:48 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:48 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:48 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:14:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:48.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:14:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:49.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:14:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:14:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:50 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:50 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:50 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:50.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:51.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:14:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:52 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290000f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:52 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:52 np0005590810 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 21 11:14:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:52 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:52.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:14:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:53.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:14:53 np0005590810 python3.9[142117]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 11:14:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:14:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:14:53 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:14:53 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:14:53 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:14:53 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:54 np0005590810 python3.9[142283]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:14:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:54 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:54 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:54 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:14:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:54 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:54.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:54 np0005590810 podman[142374]: 2026-01-21 16:14:54.95066564 +0000 UTC m=+0.044248887 container create 0dabc04cfd85306a8e92697043f8a160123189da38a8a8f07d56e1e1c6c97761 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lamarr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:14:54 np0005590810 systemd[1]: Started libpod-conmon-0dabc04cfd85306a8e92697043f8a160123189da38a8a8f07d56e1e1c6c97761.scope.
Jan 21 11:14:55 np0005590810 podman[142374]: 2026-01-21 16:14:54.92971164 +0000 UTC m=+0.023294907 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:14:55 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:14:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:55.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:55 np0005590810 podman[142374]: 2026-01-21 16:14:55.042067278 +0000 UTC m=+0.135650545 container init 0dabc04cfd85306a8e92697043f8a160123189da38a8a8f07d56e1e1c6c97761 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lamarr, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:14:55 np0005590810 podman[142374]: 2026-01-21 16:14:55.050395149 +0000 UTC m=+0.143978396 container start 0dabc04cfd85306a8e92697043f8a160123189da38a8a8f07d56e1e1c6c97761 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lamarr, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 21 11:14:55 np0005590810 podman[142374]: 2026-01-21 16:14:55.0538097 +0000 UTC m=+0.147392947 container attach 0dabc04cfd85306a8e92697043f8a160123189da38a8a8f07d56e1e1c6c97761 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lamarr, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 11:14:55 np0005590810 kind_lamarr[142391]: 167 167
Jan 21 11:14:55 np0005590810 systemd[1]: libpod-0dabc04cfd85306a8e92697043f8a160123189da38a8a8f07d56e1e1c6c97761.scope: Deactivated successfully.
Jan 21 11:14:55 np0005590810 podman[142374]: 2026-01-21 16:14:55.060244738 +0000 UTC m=+0.153827985 container died 0dabc04cfd85306a8e92697043f8a160123189da38a8a8f07d56e1e1c6c97761 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:14:55 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ac18c4a51150cc3b5ed8092eeac168594641d9ce69bd069e72151d9007b78797-merged.mount: Deactivated successfully.
Jan 21 11:14:55 np0005590810 podman[142374]: 2026-01-21 16:14:55.101325482 +0000 UTC m=+0.194908729 container remove 0dabc04cfd85306a8e92697043f8a160123189da38a8a8f07d56e1e1c6c97761 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lamarr, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 21 11:14:55 np0005590810 systemd[1]: libpod-conmon-0dabc04cfd85306a8e92697043f8a160123189da38a8a8f07d56e1e1c6c97761.scope: Deactivated successfully.
Jan 21 11:14:55 np0005590810 podman[142416]: 2026-01-21 16:14:55.257895756 +0000 UTC m=+0.040774496 container create 2dce004873b3da4b39c31cdf19e29fb9589152bd127c4dca154049d1533d2a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:14:55 np0005590810 systemd[1]: Started libpod-conmon-2dce004873b3da4b39c31cdf19e29fb9589152bd127c4dca154049d1533d2a0b.scope.
Jan 21 11:14:55 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:14:55 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5434b31fba447aca688df39a6cd60c511f3ee0037265f65d0fd9a8caadcf83ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:55 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5434b31fba447aca688df39a6cd60c511f3ee0037265f65d0fd9a8caadcf83ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:55 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5434b31fba447aca688df39a6cd60c511f3ee0037265f65d0fd9a8caadcf83ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:55 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5434b31fba447aca688df39a6cd60c511f3ee0037265f65d0fd9a8caadcf83ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:55 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5434b31fba447aca688df39a6cd60c511f3ee0037265f65d0fd9a8caadcf83ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:55 np0005590810 podman[142416]: 2026-01-21 16:14:55.240241222 +0000 UTC m=+0.023119982 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:14:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:14:55 np0005590810 podman[142416]: 2026-01-21 16:14:55.359112661 +0000 UTC m=+0.141991431 container init 2dce004873b3da4b39c31cdf19e29fb9589152bd127c4dca154049d1533d2a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_boyd, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:14:55 np0005590810 podman[142416]: 2026-01-21 16:14:55.368413293 +0000 UTC m=+0.151292033 container start 2dce004873b3da4b39c31cdf19e29fb9589152bd127c4dca154049d1533d2a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_boyd, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:14:55 np0005590810 podman[142416]: 2026-01-21 16:14:55.371873196 +0000 UTC m=+0.154751936 container attach 2dce004873b3da4b39c31cdf19e29fb9589152bd127c4dca154049d1533d2a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_boyd, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:14:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161455 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:14:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:14:55] "GET /metrics HTTP/1.1" 200 48193 "" "Prometheus/2.51.0"
Jan 21 11:14:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:14:55] "GET /metrics HTTP/1.1" 200 48193 "" "Prometheus/2.51.0"
Jan 21 11:14:55 np0005590810 recursing_boyd[142433]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:14:55 np0005590810 recursing_boyd[142433]: --> All data devices are unavailable
Jan 21 11:14:55 np0005590810 systemd[1]: libpod-2dce004873b3da4b39c31cdf19e29fb9589152bd127c4dca154049d1533d2a0b.scope: Deactivated successfully.
Jan 21 11:14:55 np0005590810 podman[142472]: 2026-01-21 16:14:55.765380531 +0000 UTC m=+0.030781020 container died 2dce004873b3da4b39c31cdf19e29fb9589152bd127c4dca154049d1533d2a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_boyd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 21 11:14:55 np0005590810 systemd[1]: var-lib-containers-storage-overlay-5434b31fba447aca688df39a6cd60c511f3ee0037265f65d0fd9a8caadcf83ed-merged.mount: Deactivated successfully.
Jan 21 11:14:55 np0005590810 podman[142472]: 2026-01-21 16:14:55.817574967 +0000 UTC m=+0.082975436 container remove 2dce004873b3da4b39c31cdf19e29fb9589152bd127c4dca154049d1533d2a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_boyd, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 11:14:55 np0005590810 systemd[1]: libpod-conmon-2dce004873b3da4b39c31cdf19e29fb9589152bd127c4dca154049d1533d2a0b.scope: Deactivated successfully.
Jan 21 11:14:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:56 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:56 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:56 np0005590810 podman[142678]: 2026-01-21 16:14:56.433709891 +0000 UTC m=+0.056497226 container create da2ad7182ea4ffbe1f122f16395115035257da9c5e64098c3cca69bd7b3838ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:14:56 np0005590810 systemd[1]: Started libpod-conmon-da2ad7182ea4ffbe1f122f16395115035257da9c5e64098c3cca69bd7b3838ec.scope.
Jan 21 11:14:56 np0005590810 podman[142678]: 2026-01-21 16:14:56.402792867 +0000 UTC m=+0.025580232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:14:56 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:14:56 np0005590810 podman[142678]: 2026-01-21 16:14:56.526864564 +0000 UTC m=+0.149651929 container init da2ad7182ea4ffbe1f122f16395115035257da9c5e64098c3cca69bd7b3838ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 21 11:14:56 np0005590810 podman[142678]: 2026-01-21 16:14:56.534476522 +0000 UTC m=+0.157263867 container start da2ad7182ea4ffbe1f122f16395115035257da9c5e64098c3cca69bd7b3838ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ritchie, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:14:56 np0005590810 systemd[1]: libpod-da2ad7182ea4ffbe1f122f16395115035257da9c5e64098c3cca69bd7b3838ec.scope: Deactivated successfully.
Jan 21 11:14:56 np0005590810 optimistic_ritchie[142718]: 167 167
Jan 21 11:14:56 np0005590810 conmon[142718]: conmon da2ad7182ea4ffbe1f12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da2ad7182ea4ffbe1f122f16395115035257da9c5e64098c3cca69bd7b3838ec.scope/container/memory.events
Jan 21 11:14:56 np0005590810 podman[142678]: 2026-01-21 16:14:56.55598056 +0000 UTC m=+0.178767935 container attach da2ad7182ea4ffbe1f122f16395115035257da9c5e64098c3cca69bd7b3838ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:14:56 np0005590810 podman[142678]: 2026-01-21 16:14:56.556453616 +0000 UTC m=+0.179240961 container died da2ad7182ea4ffbe1f122f16395115035257da9c5e64098c3cca69bd7b3838ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ritchie, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:14:56 np0005590810 systemd[1]: var-lib-containers-storage-overlay-7126744e6c4ed2d33dd616242578b7ea0f663827ba833b4a513a0810aa17ccb9-merged.mount: Deactivated successfully.
Jan 21 11:14:56 np0005590810 podman[142678]: 2026-01-21 16:14:56.617717335 +0000 UTC m=+0.240504680 container remove da2ad7182ea4ffbe1f122f16395115035257da9c5e64098c3cca69bd7b3838ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Jan 21 11:14:56 np0005590810 systemd[1]: libpod-conmon-da2ad7182ea4ffbe1f122f16395115035257da9c5e64098c3cca69bd7b3838ec.scope: Deactivated successfully.
Jan 21 11:14:56 np0005590810 podman[142774]: 2026-01-21 16:14:56.812152347 +0000 UTC m=+0.074026814 container create 851cafca61610c340093a54794111d80ac0d43d0c8bee1a2cfc32d2a69f27d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_panini, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:14:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:56 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:56 np0005590810 systemd[1]: Started libpod-conmon-851cafca61610c340093a54794111d80ac0d43d0c8bee1a2cfc32d2a69f27d02.scope.
Jan 21 11:14:56 np0005590810 podman[142774]: 2026-01-21 16:14:56.764895773 +0000 UTC m=+0.026770260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:14:56 np0005590810 python3.9[142752]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 11:14:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:56.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:56 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:14:56 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3cf21800009cd00bdf2464f8c5d1f08476a10fd6f2fe34e28d7911b55380fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:56 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3cf21800009cd00bdf2464f8c5d1f08476a10fd6f2fe34e28d7911b55380fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:56 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3cf21800009cd00bdf2464f8c5d1f08476a10fd6f2fe34e28d7911b55380fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:56 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3cf21800009cd00bdf2464f8c5d1f08476a10fd6f2fe34e28d7911b55380fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:56 np0005590810 podman[142774]: 2026-01-21 16:14:56.916173584 +0000 UTC m=+0.178048081 container init 851cafca61610c340093a54794111d80ac0d43d0c8bee1a2cfc32d2a69f27d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_panini, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 11:14:56 np0005590810 podman[142774]: 2026-01-21 16:14:56.926420208 +0000 UTC m=+0.188294675 container start 851cafca61610c340093a54794111d80ac0d43d0c8bee1a2cfc32d2a69f27d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_panini, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:14:56 np0005590810 podman[142774]: 2026-01-21 16:14:56.963976306 +0000 UTC m=+0.225850773 container attach 851cafca61610c340093a54794111d80ac0d43d0c8bee1a2cfc32d2a69f27d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_panini, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:14:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:14:56.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:14:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:14:56.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:14:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:57.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:57 np0005590810 crazy_panini[142792]: {
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:    "0": [
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:        {
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            "devices": [
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "/dev/loop3"
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            ],
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            "lv_name": "ceph_lv0",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            "lv_size": "21470642176",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            "name": "ceph_lv0",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            "tags": {
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.cluster_name": "ceph",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.crush_device_class": "",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.encrypted": "0",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.osd_id": "0",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.type": "block",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.vdo": "0",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:                "ceph.with_tpm": "0"
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            },
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            "type": "block",
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:            "vg_name": "ceph_vg0"
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:        }
Jan 21 11:14:57 np0005590810 crazy_panini[142792]:    ]
Jan 21 11:14:57 np0005590810 crazy_panini[142792]: }
Jan 21 11:14:57 np0005590810 systemd[1]: libpod-851cafca61610c340093a54794111d80ac0d43d0c8bee1a2cfc32d2a69f27d02.scope: Deactivated successfully.
Jan 21 11:14:57 np0005590810 podman[142774]: 2026-01-21 16:14:57.266954923 +0000 UTC m=+0.528829390 container died 851cafca61610c340093a54794111d80ac0d43d0c8bee1a2cfc32d2a69f27d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:14:57 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ef3cf21800009cd00bdf2464f8c5d1f08476a10fd6f2fe34e28d7911b55380fb-merged.mount: Deactivated successfully.
Jan 21 11:14:57 np0005590810 podman[142774]: 2026-01-21 16:14:57.320484031 +0000 UTC m=+0.582358498 container remove 851cafca61610c340093a54794111d80ac0d43d0c8bee1a2cfc32d2a69f27d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_panini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 21 11:14:57 np0005590810 systemd[1]: libpod-conmon-851cafca61610c340093a54794111d80ac0d43d0c8bee1a2cfc32d2a69f27d02.scope: Deactivated successfully.
Jan 21 11:14:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:14:57 np0005590810 python3[143017]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 21 11:14:57 np0005590810 podman[143057]: 2026-01-21 16:14:57.91398734 +0000 UTC m=+0.046412448 container create 1424cd5018156c3f2cb4f5724030142a131f8dacf17d3ae246eb93f9fc0915af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Jan 21 11:14:57 np0005590810 systemd[1]: Started libpod-conmon-1424cd5018156c3f2cb4f5724030142a131f8dacf17d3ae246eb93f9fc0915af.scope.
Jan 21 11:14:57 np0005590810 podman[143057]: 2026-01-21 16:14:57.890018312 +0000 UTC m=+0.022443420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:14:57 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:14:58 np0005590810 podman[143057]: 2026-01-21 16:14:58.013138909 +0000 UTC m=+0.145564037 container init 1424cd5018156c3f2cb4f5724030142a131f8dacf17d3ae246eb93f9fc0915af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_herschel, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:14:58 np0005590810 podman[143057]: 2026-01-21 16:14:58.024292821 +0000 UTC m=+0.156717929 container start 1424cd5018156c3f2cb4f5724030142a131f8dacf17d3ae246eb93f9fc0915af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 21 11:14:58 np0005590810 podman[143057]: 2026-01-21 16:14:58.031248327 +0000 UTC m=+0.163673435 container attach 1424cd5018156c3f2cb4f5724030142a131f8dacf17d3ae246eb93f9fc0915af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_herschel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Jan 21 11:14:58 np0005590810 elegant_herschel[143096]: 167 167
Jan 21 11:14:58 np0005590810 systemd[1]: libpod-1424cd5018156c3f2cb4f5724030142a131f8dacf17d3ae246eb93f9fc0915af.scope: Deactivated successfully.
Jan 21 11:14:58 np0005590810 podman[143057]: 2026-01-21 16:14:58.033074246 +0000 UTC m=+0.165499354 container died 1424cd5018156c3f2cb4f5724030142a131f8dacf17d3ae246eb93f9fc0915af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_herschel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 11:14:58 np0005590810 systemd[1]: var-lib-containers-storage-overlay-90fa93c2405dba77454f9e655afcac2a36de5de641671f7767228693d58b1844-merged.mount: Deactivated successfully.
Jan 21 11:14:58 np0005590810 podman[143057]: 2026-01-21 16:14:58.078899984 +0000 UTC m=+0.211325092 container remove 1424cd5018156c3f2cb4f5724030142a131f8dacf17d3ae246eb93f9fc0915af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 11:14:58 np0005590810 systemd[1]: libpod-conmon-1424cd5018156c3f2cb4f5724030142a131f8dacf17d3ae246eb93f9fc0915af.scope: Deactivated successfully.
Jan 21 11:14:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:58 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:58 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:58 np0005590810 podman[143149]: 2026-01-21 16:14:58.24078323 +0000 UTC m=+0.045360254 container create 9b71bd5ac31d84c2c2469cb5ffddf35b318a4ee90e9d6a6d040ea50352b3fc08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_colden, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 11:14:58 np0005590810 systemd[1]: Started libpod-conmon-9b71bd5ac31d84c2c2469cb5ffddf35b318a4ee90e9d6a6d040ea50352b3fc08.scope.
Jan 21 11:14:58 np0005590810 podman[143149]: 2026-01-21 16:14:58.221307748 +0000 UTC m=+0.025884792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:14:58 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:14:58 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e751197527e0869520b235639ace0f842acce4ea13587d7f42d3e61a1a3b05d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:58 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e751197527e0869520b235639ace0f842acce4ea13587d7f42d3e61a1a3b05d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:58 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e751197527e0869520b235639ace0f842acce4ea13587d7f42d3e61a1a3b05d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:58 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e751197527e0869520b235639ace0f842acce4ea13587d7f42d3e61a1a3b05d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:14:58 np0005590810 podman[143149]: 2026-01-21 16:14:58.343369482 +0000 UTC m=+0.147946526 container init 9b71bd5ac31d84c2c2469cb5ffddf35b318a4ee90e9d6a6d040ea50352b3fc08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_colden, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 21 11:14:58 np0005590810 podman[143149]: 2026-01-21 16:14:58.352301121 +0000 UTC m=+0.156878145 container start 9b71bd5ac31d84c2c2469cb5ffddf35b318a4ee90e9d6a6d040ea50352b3fc08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 21 11:14:58 np0005590810 podman[143149]: 2026-01-21 16:14:58.356153636 +0000 UTC m=+0.160730680 container attach 9b71bd5ac31d84c2c2469cb5ffddf35b318a4ee90e9d6a6d040ea50352b3fc08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_colden, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:14:58 np0005590810 python3.9[143270]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:14:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:14:58 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:14:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:14:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:14:58.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:14:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:14:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:14:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:14:59.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:14:59 np0005590810 lvm[143417]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:14:59 np0005590810 lvm[143417]: VG ceph_vg0 finished
Jan 21 11:14:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:14:59 np0005590810 recursing_colden[143213]: {}
Jan 21 11:14:59 np0005590810 systemd[1]: libpod-9b71bd5ac31d84c2c2469cb5ffddf35b318a4ee90e9d6a6d040ea50352b3fc08.scope: Deactivated successfully.
Jan 21 11:14:59 np0005590810 systemd[1]: libpod-9b71bd5ac31d84c2c2469cb5ffddf35b318a4ee90e9d6a6d040ea50352b3fc08.scope: Consumed 1.287s CPU time.
Jan 21 11:14:59 np0005590810 podman[143149]: 2026-01-21 16:14:59.188039695 +0000 UTC m=+0.992616719 container died 9b71bd5ac31d84c2c2469cb5ffddf35b318a4ee90e9d6a6d040ea50352b3fc08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 11:14:59 np0005590810 systemd[1]: var-lib-containers-storage-overlay-e751197527e0869520b235639ace0f842acce4ea13587d7f42d3e61a1a3b05d6-merged.mount: Deactivated successfully.
Jan 21 11:14:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:14:59 np0005590810 podman[143149]: 2026-01-21 16:14:59.452540912 +0000 UTC m=+1.257117936 container remove 9b71bd5ac31d84c2c2469cb5ffddf35b318a4ee90e9d6a6d040ea50352b3fc08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:14:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:14:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:14:59 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:14:59 np0005590810 systemd[1]: libpod-conmon-9b71bd5ac31d84c2c2469cb5ffddf35b318a4ee90e9d6a6d040ea50352b3fc08.scope: Deactivated successfully.
Jan 21 11:14:59 np0005590810 python3.9[143508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:00 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:00 np0005590810 python3.9[143613]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:00 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:00 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:15:00 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:15:00 np0005590810 python3.9[143765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:00 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:00.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:01.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:01 np0005590810 python3.9[143845]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.bl3gh_eu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:15:02 np0005590810 python3.9[143997]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:02 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:02 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:02 np0005590810 python3.9[144075]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:02 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:02.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:03.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:15:03 np0005590810 python3.9[144229]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:15:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:15:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:04 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:04 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:04 np0005590810 python3[144382]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 11:15:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:04 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:04.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:04 np0005590810 python3.9[144535]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:05.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:15:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:15:05] "GET /metrics HTTP/1.1" 200 48193 "" "Prometheus/2.51.0"
Jan 21 11:15:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:15:05] "GET /metrics HTTP/1.1" 200 48193 "" "Prometheus/2.51.0"
Jan 21 11:15:05 np0005590810 python3.9[144661]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012104.462396-426-58330582237865/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:05 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:15:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:06 np0005590810 python3.9[144813]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:06.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:15:06.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:15:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:07.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:15:07 np0005590810 python3.9[144940]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012105.94217-471-102850764280504/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:08 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290002f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:08 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:08 np0005590810 python3.9[145092]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:08 np0005590810 python3.9[145217]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012107.6435869-516-90062272728319/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:08 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:15:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:08 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:15:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:08 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:08.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161509 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:15:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:15:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:09.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:15:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:15:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:15:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:15:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:15:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:15:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:15:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:15:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:15:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:15:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:15:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:10 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:10 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:10 np0005590810 python3.9[145371]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:10 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:10.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:10 np0005590810 python3.9[145497]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012109.8600543-561-59573082276561/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:11.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:15:11 np0005590810 python3.9[145650]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:12 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:12 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:12 np0005590810 python3.9[145775]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012111.2328522-606-164116811901568/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:12 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:12.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:15:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:13.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:15:13 np0005590810 python3.9[145928]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:15:14 np0005590810 python3.9[146081]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:15:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:15:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:14 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:14 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:14 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:14.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:14 np0005590810 python3.9[146237]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:15.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:15:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:15:15] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 21 11:15:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:15:15] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 21 11:15:15 np0005590810 python3.9[146390]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:15:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:15 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:15:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:15 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:15:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:16 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:16 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:16 np0005590810 python3.9[146568]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:15:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:16 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:16.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:15:16.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:15:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:15:16.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:15:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:15:16.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:15:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:15:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:17.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:15:17 np0005590810 python3.9[146723]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:15:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 21 11:15:18 np0005590810 python3.9[146879]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:15:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2980044e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:18.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:19.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:15:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 21 11:15:19 np0005590810 python3.9[147030]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:15:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:20 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:20 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278000ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:20 np0005590810 python3.9[147185]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:15:20 np0005590810 ovs-vsctl[147186]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 21 11:15:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:20 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:20.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:21.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:15:21 np0005590810 python3.9[147340]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:15:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:21 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:15:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:22 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2980044e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:22 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278000ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:22 np0005590810 python3.9[147495]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:15:22 np0005590810 ovs-vsctl[147496]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 21 11:15:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:22 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:22.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:23 np0005590810 python3.9[147647]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:15:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:23.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Jan 21 11:15:23 np0005590810 python3.9[147802]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:15:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:15:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:24 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:24 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2980044e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:15:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:15:24 np0005590810 python3.9[147954]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:24 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278001e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:24 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:15:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:24 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:15:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:24.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:25 np0005590810 python3.9[148033]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:15:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:25.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Jan 21 11:15:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161525 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:15:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:15:25] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 21 11:15:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:15:25] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 21 11:15:25 np0005590810 python3.9[148186]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:26 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:26 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:26 np0005590810 python3.9[148264]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:15:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:26 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:26.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:15:26.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:15:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:27.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:27 np0005590810 python3.9[148417]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Jan 21 11:15:27 np0005590810 python3.9[148570]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:28 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:15:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:28 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278001e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:28 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:28 np0005590810 python3.9[148648]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:28 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:15:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:28.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:15:28 np0005590810 python3.9[148801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:29.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:15:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Jan 21 11:15:29 np0005590810 python3.9[148880]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:30 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:30 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278001e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:30 np0005590810 python3.9[149032]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:15:30 np0005590810 systemd[1]: Reloading.
Jan 21 11:15:30 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:15:30 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:15:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:30 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161531 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:15:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:31.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Jan 21 11:15:31 np0005590810 python3.9[149224]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:32 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:32 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:32 np0005590810 python3.9[149302]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:32 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:32.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:33.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:33 np0005590810 python3.9[149455]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Jan 21 11:15:33 np0005590810 python3.9[149534]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:15:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:34 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:34 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:34 np0005590810 python3.9[149686]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:15:34 np0005590810 systemd[1]: Reloading.
Jan 21 11:15:34 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:15:34 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:15:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:34 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:34.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:35 np0005590810 systemd[1]: Starting Create netns directory...
Jan 21 11:15:35 np0005590810 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 11:15:35 np0005590810 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 11:15:35 np0005590810 systemd[1]: Finished Create netns directory.
Jan 21 11:15:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:35.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Jan 21 11:15:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:15:35] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 21 11:15:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:15:35] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 21 11:15:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:36 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:36 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:36 np0005590810 python3.9[149880]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:15:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:36 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:36.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:15:36.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:15:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:37.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:37 np0005590810 python3.9[150058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:15:37 np0005590810 python3.9[150182]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012136.6604955-1359-184892416017963/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:15:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:38 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:38 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:38 np0005590810 python3.9[150334]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:38 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:38.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:15:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:39.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:15:39
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.log', '.mgr', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'volumes', 'images', 'vms']
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:15:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:15:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:15:39 np0005590810 python3.9[150488]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:15:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:15:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:40 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:40 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:40 np0005590810 python3.9[150640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:15:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:40 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:40.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:40 np0005590810 python3.9[150764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012139.953869-1458-71972472287739/.source.json _original_basename=.rgl_gltr follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:15:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:41.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:15:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:15:41 np0005590810 python3.9[150915]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:42 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:42 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:42 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:42.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:43.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:15:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:15:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:44 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:44 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:44 np0005590810 python3.9[151340]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 21 11:15:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:44 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:44.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:45.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:45 np0005590810 python3.9[151493]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 11:15:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:15:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:15:45] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 21 11:15:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:15:45] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 21 11:15:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:46 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:46 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:46 np0005590810 python3[151647]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 11:15:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:46 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c0026d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:46.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:15:47.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:15:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:47.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:15:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:48 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:48 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:48 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:48.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:15:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:49.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:15:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:50 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c0026d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:50 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:50 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:50.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:51.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:15:51 np0005590810 podman[151659]: 2026-01-21 16:15:51.446784481 +0000 UTC m=+4.942335243 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 21 11:15:51 np0005590810 podman[151785]: 2026-01-21 16:15:51.591703306 +0000 UTC m=+0.054596003 container create 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 11:15:51 np0005590810 podman[151785]: 2026-01-21 16:15:51.561443354 +0000 UTC m=+0.024336051 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 21 11:15:51 np0005590810 python3[151647]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 21 11:15:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:52 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:52 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:52 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:52.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:15:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:53.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:15:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:15:53 np0005590810 python3.9[151977]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:15:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:15:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:54 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:54 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:15:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:15:54 np0005590810 python3.9[152131]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:54 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:54.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:15:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:55.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:15:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:15:55 np0005590810 python3.9[152209]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:15:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:15:55] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 21 11:15:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:15:55] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 21 11:15:56 np0005590810 python3.9[152360]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769012155.5214384-1692-186911841817521/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:15:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:56 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:56 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:56 np0005590810 python3.9[152436]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 11:15:56 np0005590810 systemd[1]: Reloading.
Jan 21 11:15:56 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:15:56 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:15:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:56 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:15:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:56.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:15:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:15:57.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:15:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:15:57.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:15:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:57.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:15:57 np0005590810 python3.9[152575]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:15:57 np0005590810 systemd[1]: Reloading.
Jan 21 11:15:57 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:15:57 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:15:57 np0005590810 systemd[1]: Starting ovn_controller container...
Jan 21 11:15:58 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:15:58 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a683fabfcd4c7d433b89f7a0140bc31762fe1039d4a432b1858d05efc109335f/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 21 11:15:58 np0005590810 systemd[1]: Started /usr/bin/podman healthcheck run 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b.
Jan 21 11:15:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:58 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:58 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:58 np0005590810 podman[152617]: 2026-01-21 16:15:58.269671448 +0000 UTC m=+0.289575315 container init 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: + sudo -E kolla_set_configs
Jan 21 11:15:58 np0005590810 podman[152617]: 2026-01-21 16:15:58.296663507 +0000 UTC m=+0.316567354 container start 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 21 11:15:58 np0005590810 edpm-start-podman-container[152617]: ovn_controller
Jan 21 11:15:58 np0005590810 systemd[1]: Created slice User Slice of UID 0.
Jan 21 11:15:58 np0005590810 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 21 11:15:58 np0005590810 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 21 11:15:58 np0005590810 systemd[1]: Starting User Manager for UID 0...
Jan 21 11:15:58 np0005590810 edpm-start-podman-container[152616]: Creating additional drop-in dependency for "ovn_controller" (9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b)
Jan 21 11:15:58 np0005590810 podman[152639]: 2026-01-21 16:15:58.366257173 +0000 UTC m=+0.060408694 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 11:15:58 np0005590810 systemd[1]: 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b-32622bdfe12b2471.service: Main process exited, code=exited, status=1/FAILURE
Jan 21 11:15:58 np0005590810 systemd[1]: 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b-32622bdfe12b2471.service: Failed with result 'exit-code'.
Jan 21 11:15:58 np0005590810 systemd[1]: Reloading.
Jan 21 11:15:58 np0005590810 systemd[152667]: Queued start job for default target Main User Target.
Jan 21 11:15:58 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:15:58 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:15:58 np0005590810 systemd[152667]: Created slice User Application Slice.
Jan 21 11:15:58 np0005590810 systemd[152667]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 21 11:15:58 np0005590810 systemd[152667]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 11:15:58 np0005590810 systemd[152667]: Reached target Paths.
Jan 21 11:15:58 np0005590810 systemd[152667]: Reached target Timers.
Jan 21 11:15:58 np0005590810 systemd[152667]: Starting D-Bus User Message Bus Socket...
Jan 21 11:15:58 np0005590810 systemd[152667]: Starting Create User's Volatile Files and Directories...
Jan 21 11:15:58 np0005590810 systemd[152667]: Finished Create User's Volatile Files and Directories.
Jan 21 11:15:58 np0005590810 systemd[152667]: Listening on D-Bus User Message Bus Socket.
Jan 21 11:15:58 np0005590810 systemd[152667]: Reached target Sockets.
Jan 21 11:15:58 np0005590810 systemd[152667]: Reached target Basic System.
Jan 21 11:15:58 np0005590810 systemd[152667]: Reached target Main User Target.
Jan 21 11:15:58 np0005590810 systemd[152667]: Startup finished in 159ms.
Jan 21 11:15:58 np0005590810 systemd[1]: Started User Manager for UID 0.
Jan 21 11:15:58 np0005590810 systemd[1]: Started ovn_controller container.
Jan 21 11:15:58 np0005590810 systemd[1]: Started Session c1 of User root.
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: INFO:__main__:Validating config file
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: INFO:__main__:Writing out command to execute
Jan 21 11:15:58 np0005590810 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: ++ cat /run_command
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: + ARGS=
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: + sudo kolla_copy_cacerts
Jan 21 11:15:58 np0005590810 systemd[1]: Started Session c2 of User root.
Jan 21 11:15:58 np0005590810 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: + [[ ! -n '' ]]
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: + . kolla_extend_start
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: + umask 0022
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 21 11:15:58 np0005590810 NetworkManager[48894]: <info>  [1769012158.8800] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 21 11:15:58 np0005590810 NetworkManager[48894]: <info>  [1769012158.8811] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 11:15:58 np0005590810 NetworkManager[48894]: <warn>  [1769012158.8814] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 11:15:58 np0005590810 NetworkManager[48894]: <info>  [1769012158.8823] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 21 11:15:58 np0005590810 NetworkManager[48894]: <info>  [1769012158.8830] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 21 11:15:58 np0005590810 NetworkManager[48894]: <info>  [1769012158.8834] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 21 11:15:58 np0005590810 kernel: br-int: entered promiscuous mode
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 21 11:15:58 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:58Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 21 11:15:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:15:58 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c002140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:15:58 np0005590810 systemd-udevd[152768]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 11:15:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:15:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:15:58.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:15:59 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:59Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 11:15:59 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:59Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 11:15:59 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:59Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 11:15:59 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:59Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 11:15:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:15:59 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:59Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 11:15:59 np0005590810 ovn_controller[152632]: 2026-01-21T16:15:59Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 11:15:59 np0005590810 NetworkManager[48894]: <info>  [1769012159.1091] manager: (ovn-8315df-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 21 11:15:59 np0005590810 kernel: genev_sys_6081: entered promiscuous mode
Jan 21 11:15:59 np0005590810 systemd-udevd[152770]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 11:15:59 np0005590810 NetworkManager[48894]: <info>  [1769012159.1266] device (genev_sys_6081): carrier: link connected
Jan 21 11:15:59 np0005590810 NetworkManager[48894]: <info>  [1769012159.1269] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 21 11:15:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:15:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:15:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:15:59.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:15:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:15:59 np0005590810 NetworkManager[48894]: <info>  [1769012159.5133] manager: (ovn-6b7ab1-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 21 11:16:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:00 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:00 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:16:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:16:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:16:00 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:16:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:16:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:00 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:00.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:01 np0005590810 NetworkManager[48894]: <info>  [1769012161.0335] manager: (ovn-ff0ffa-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 21 11:16:01 np0005590810 python3.9[152983]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 21 11:16:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:01.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:16:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:16:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:16:01 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:16:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:16:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:16:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:16:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:16:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:16:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:16:01 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:16:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:02 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:02 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa274002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:02 np0005590810 python3.9[153187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:02 np0005590810 podman[153231]: 2026-01-21 16:16:02.277849914 +0000 UTC m=+0.046563073 container create ef9d83c0f960ba5821f43b5aa1b67fef110a2d188ba2db96290c536db46074e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:16:02 np0005590810 systemd[1]: Started libpod-conmon-ef9d83c0f960ba5821f43b5aa1b67fef110a2d188ba2db96290c536db46074e4.scope.
Jan 21 11:16:02 np0005590810 podman[153231]: 2026-01-21 16:16:02.25759068 +0000 UTC m=+0.026303869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:16:02 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:16:02 np0005590810 podman[153231]: 2026-01-21 16:16:02.376194827 +0000 UTC m=+0.144908006 container init ef9d83c0f960ba5821f43b5aa1b67fef110a2d188ba2db96290c536db46074e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:16:02 np0005590810 podman[153231]: 2026-01-21 16:16:02.385518784 +0000 UTC m=+0.154231943 container start ef9d83c0f960ba5821f43b5aa1b67fef110a2d188ba2db96290c536db46074e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 11:16:02 np0005590810 podman[153231]: 2026-01-21 16:16:02.389750199 +0000 UTC m=+0.158463358 container attach ef9d83c0f960ba5821f43b5aa1b67fef110a2d188ba2db96290c536db46074e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:16:02 np0005590810 eloquent_solomon[153270]: 167 167
Jan 21 11:16:02 np0005590810 systemd[1]: libpod-ef9d83c0f960ba5821f43b5aa1b67fef110a2d188ba2db96290c536db46074e4.scope: Deactivated successfully.
Jan 21 11:16:02 np0005590810 podman[153231]: 2026-01-21 16:16:02.394883592 +0000 UTC m=+0.163596781 container died ef9d83c0f960ba5821f43b5aa1b67fef110a2d188ba2db96290c536db46074e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 21 11:16:02 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a7578d37e2e879d670a6f48bc6161f609a070832e16e9a17070278745b8a0ff5-merged.mount: Deactivated successfully.
Jan 21 11:16:02 np0005590810 podman[153231]: 2026-01-21 16:16:02.449565524 +0000 UTC m=+0.218278683 container remove ef9d83c0f960ba5821f43b5aa1b67fef110a2d188ba2db96290c536db46074e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_solomon, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 11:16:02 np0005590810 systemd[1]: libpod-conmon-ef9d83c0f960ba5821f43b5aa1b67fef110a2d188ba2db96290c536db46074e4.scope: Deactivated successfully.
Jan 21 11:16:02 np0005590810 podman[153366]: 2026-01-21 16:16:02.61142916 +0000 UTC m=+0.029091877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:16:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:02 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:16:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:02.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:16:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:03.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:03 np0005590810 podman[153366]: 2026-01-21 16:16:03.15648362 +0000 UTC m=+0.574146327 container create f8b87f39455d9a5ef528f86c20d3786b5ec810bca6b9ca42bcfc9d0ddb901eb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:16:03 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:16:03 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:16:03 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:16:03 np0005590810 systemd[1]: Started libpod-conmon-f8b87f39455d9a5ef528f86c20d3786b5ec810bca6b9ca42bcfc9d0ddb901eb7.scope.
Jan 21 11:16:03 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:16:03 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e4bc8dbe9e2a792d9b455462687f8d499efdf018a4c5e8ef170071d815a4bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:03 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e4bc8dbe9e2a792d9b455462687f8d499efdf018a4c5e8ef170071d815a4bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:03 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e4bc8dbe9e2a792d9b455462687f8d499efdf018a4c5e8ef170071d815a4bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:03 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e4bc8dbe9e2a792d9b455462687f8d499efdf018a4c5e8ef170071d815a4bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:03 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e4bc8dbe9e2a792d9b455462687f8d499efdf018a4c5e8ef170071d815a4bd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:03 np0005590810 podman[153366]: 2026-01-21 16:16:03.269267563 +0000 UTC m=+0.686930280 container init f8b87f39455d9a5ef528f86c20d3786b5ec810bca6b9ca42bcfc9d0ddb901eb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:16:03 np0005590810 podman[153366]: 2026-01-21 16:16:03.277205626 +0000 UTC m=+0.694868333 container start f8b87f39455d9a5ef528f86c20d3786b5ec810bca6b9ca42bcfc9d0ddb901eb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 21 11:16:03 np0005590810 podman[153366]: 2026-01-21 16:16:03.28172491 +0000 UTC m=+0.699387637 container attach f8b87f39455d9a5ef528f86c20d3786b5ec810bca6b9ca42bcfc9d0ddb901eb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:16:03 np0005590810 python3.9[153406]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012161.7295334-1827-240064069781186/.source.yaml _original_basename=.ha3xhe3w follow=False checksum=516bb37d96ea760dfe7c4ffffa2c950e78c2ee95 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:16:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:16:03 np0005590810 dreamy_chaplygin[153413]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:16:03 np0005590810 dreamy_chaplygin[153413]: --> All data devices are unavailable
Jan 21 11:16:03 np0005590810 systemd[1]: libpod-f8b87f39455d9a5ef528f86c20d3786b5ec810bca6b9ca42bcfc9d0ddb901eb7.scope: Deactivated successfully.
Jan 21 11:16:03 np0005590810 podman[153366]: 2026-01-21 16:16:03.629762265 +0000 UTC m=+1.047424962 container died f8b87f39455d9a5ef528f86c20d3786b5ec810bca6b9ca42bcfc9d0ddb901eb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:16:03 np0005590810 systemd[1]: var-lib-containers-storage-overlay-f3e4bc8dbe9e2a792d9b455462687f8d499efdf018a4c5e8ef170071d815a4bd-merged.mount: Deactivated successfully.
Jan 21 11:16:03 np0005590810 podman[153366]: 2026-01-21 16:16:03.681663509 +0000 UTC m=+1.099326206 container remove f8b87f39455d9a5ef528f86c20d3786b5ec810bca6b9ca42bcfc9d0ddb901eb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chaplygin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:16:03 np0005590810 systemd[1]: libpod-conmon-f8b87f39455d9a5ef528f86c20d3786b5ec810bca6b9ca42bcfc9d0ddb901eb7.scope: Deactivated successfully.
Jan 21 11:16:04 np0005590810 python3.9[153638]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:16:04 np0005590810 ovs-vsctl[153654]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 21 11:16:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:16:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:04 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:04 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:04 np0005590810 podman[153706]: 2026-01-21 16:16:04.263217722 +0000 UTC m=+0.041185153 container create 203c0cf351dcdaa17951ee3698eecb1917296e2fa2d024e7e27fa7a5bc9de166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:16:04 np0005590810 systemd[1]: Started libpod-conmon-203c0cf351dcdaa17951ee3698eecb1917296e2fa2d024e7e27fa7a5bc9de166.scope.
Jan 21 11:16:04 np0005590810 podman[153706]: 2026-01-21 16:16:04.246557521 +0000 UTC m=+0.024524982 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:16:04 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:16:04 np0005590810 podman[153706]: 2026-01-21 16:16:04.360990386 +0000 UTC m=+0.138957837 container init 203c0cf351dcdaa17951ee3698eecb1917296e2fa2d024e7e27fa7a5bc9de166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:16:04 np0005590810 podman[153706]: 2026-01-21 16:16:04.370562871 +0000 UTC m=+0.148530302 container start 203c0cf351dcdaa17951ee3698eecb1917296e2fa2d024e7e27fa7a5bc9de166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 21 11:16:04 np0005590810 podman[153706]: 2026-01-21 16:16:04.374741294 +0000 UTC m=+0.152708755 container attach 203c0cf351dcdaa17951ee3698eecb1917296e2fa2d024e7e27fa7a5bc9de166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:16:04 np0005590810 angry_keller[153723]: 167 167
Jan 21 11:16:04 np0005590810 systemd[1]: libpod-203c0cf351dcdaa17951ee3698eecb1917296e2fa2d024e7e27fa7a5bc9de166.scope: Deactivated successfully.
Jan 21 11:16:04 np0005590810 podman[153706]: 2026-01-21 16:16:04.376881473 +0000 UTC m=+0.154848904 container died 203c0cf351dcdaa17951ee3698eecb1917296e2fa2d024e7e27fa7a5bc9de166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:16:04 np0005590810 systemd[1]: var-lib-containers-storage-overlay-f864e6cead73578cc75d234af0ec0389137562e68f122c5e72293257d0f121ab-merged.mount: Deactivated successfully.
Jan 21 11:16:04 np0005590810 podman[153706]: 2026-01-21 16:16:04.423806147 +0000 UTC m=+0.201773598 container remove 203c0cf351dcdaa17951ee3698eecb1917296e2fa2d024e7e27fa7a5bc9de166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 11:16:04 np0005590810 systemd[1]: libpod-conmon-203c0cf351dcdaa17951ee3698eecb1917296e2fa2d024e7e27fa7a5bc9de166.scope: Deactivated successfully.
Jan 21 11:16:04 np0005590810 podman[153824]: 2026-01-21 16:16:04.598748909 +0000 UTC m=+0.054282219 container create 109fe5eb181433a1649e68311e8b74cbf369e31647b261fa37730b5f29cb105c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:16:04 np0005590810 systemd[1]: Started libpod-conmon-109fe5eb181433a1649e68311e8b74cbf369e31647b261fa37730b5f29cb105c.scope.
Jan 21 11:16:04 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:16:04 np0005590810 podman[153824]: 2026-01-21 16:16:04.575659044 +0000 UTC m=+0.031192374 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:16:04 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c02de1da75f22b877da4d0d59ade2898bd80417677bb57d8439d6479ef5a5fbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:04 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c02de1da75f22b877da4d0d59ade2898bd80417677bb57d8439d6479ef5a5fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:04 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c02de1da75f22b877da4d0d59ade2898bd80417677bb57d8439d6479ef5a5fbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:04 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c02de1da75f22b877da4d0d59ade2898bd80417677bb57d8439d6479ef5a5fbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:04 np0005590810 podman[153824]: 2026-01-21 16:16:04.687040702 +0000 UTC m=+0.142574022 container init 109fe5eb181433a1649e68311e8b74cbf369e31647b261fa37730b5f29cb105c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_yonath, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:16:04 np0005590810 podman[153824]: 2026-01-21 16:16:04.695601795 +0000 UTC m=+0.151135105 container start 109fe5eb181433a1649e68311e8b74cbf369e31647b261fa37730b5f29cb105c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_yonath, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:16:04 np0005590810 podman[153824]: 2026-01-21 16:16:04.699643333 +0000 UTC m=+0.155176673 container attach 109fe5eb181433a1649e68311e8b74cbf369e31647b261fa37730b5f29cb105c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 21 11:16:04 np0005590810 python3.9[153895]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:16:04 np0005590810 ovs-vsctl[153900]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 21 11:16:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:04 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:04.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]: {
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:    "0": [
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:        {
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            "devices": [
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "/dev/loop3"
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            ],
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            "lv_name": "ceph_lv0",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            "lv_size": "21470642176",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            "name": "ceph_lv0",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            "tags": {
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.cluster_name": "ceph",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.crush_device_class": "",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.encrypted": "0",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.osd_id": "0",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.type": "block",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.vdo": "0",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:                "ceph.with_tpm": "0"
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            },
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            "type": "block",
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:            "vg_name": "ceph_vg0"
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:        }
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]:    ]
Jan 21 11:16:05 np0005590810 ecstatic_yonath[153891]: }
Jan 21 11:16:05 np0005590810 systemd[1]: libpod-109fe5eb181433a1649e68311e8b74cbf369e31647b261fa37730b5f29cb105c.scope: Deactivated successfully.
Jan 21 11:16:05 np0005590810 podman[153824]: 2026-01-21 16:16:05.075728172 +0000 UTC m=+0.531261492 container died 109fe5eb181433a1649e68311e8b74cbf369e31647b261fa37730b5f29cb105c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_yonath, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:16:05 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c02de1da75f22b877da4d0d59ade2898bd80417677bb57d8439d6479ef5a5fbb-merged.mount: Deactivated successfully.
Jan 21 11:16:05 np0005590810 podman[153824]: 2026-01-21 16:16:05.133179981 +0000 UTC m=+0.588713291 container remove 109fe5eb181433a1649e68311e8b74cbf369e31647b261fa37730b5f29cb105c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:16:05 np0005590810 systemd[1]: libpod-conmon-109fe5eb181433a1649e68311e8b74cbf369e31647b261fa37730b5f29cb105c.scope: Deactivated successfully.
Jan 21 11:16:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:05.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:16:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:16:05] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 21 11:16:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:16:05] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 21 11:16:05 np0005590810 podman[154159]: 2026-01-21 16:16:05.780032335 +0000 UTC m=+0.044817249 container create 4e408587f958f688c4df5b0bd1ec54e581490d15700312b169900bd5a7e9f3b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 11:16:05 np0005590810 systemd[1]: Started libpod-conmon-4e408587f958f688c4df5b0bd1ec54e581490d15700312b169900bd5a7e9f3b8.scope.
Jan 21 11:16:05 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:16:05 np0005590810 podman[154159]: 2026-01-21 16:16:05.762099134 +0000 UTC m=+0.026884038 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:16:05 np0005590810 podman[154159]: 2026-01-21 16:16:05.874527725 +0000 UTC m=+0.139312629 container init 4e408587f958f688c4df5b0bd1ec54e581490d15700312b169900bd5a7e9f3b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:16:05 np0005590810 podman[154159]: 2026-01-21 16:16:05.88282379 +0000 UTC m=+0.147608664 container start 4e408587f958f688c4df5b0bd1ec54e581490d15700312b169900bd5a7e9f3b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:16:05 np0005590810 podman[154159]: 2026-01-21 16:16:05.886416073 +0000 UTC m=+0.151200957 container attach 4e408587f958f688c4df5b0bd1ec54e581490d15700312b169900bd5a7e9f3b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cray, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:16:05 np0005590810 stupefied_cray[154178]: 167 167
Jan 21 11:16:05 np0005590810 systemd[1]: libpod-4e408587f958f688c4df5b0bd1ec54e581490d15700312b169900bd5a7e9f3b8.scope: Deactivated successfully.
Jan 21 11:16:05 np0005590810 podman[154159]: 2026-01-21 16:16:05.892913661 +0000 UTC m=+0.157698565 container died 4e408587f958f688c4df5b0bd1ec54e581490d15700312b169900bd5a7e9f3b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 21 11:16:05 np0005590810 systemd[1]: var-lib-containers-storage-overlay-882bd289fe13db67ad5cbe86c0c21729d40607d7d82bcfda334147ce048b94cb-merged.mount: Deactivated successfully.
Jan 21 11:16:05 np0005590810 podman[154159]: 2026-01-21 16:16:05.942334744 +0000 UTC m=+0.207119628 container remove 4e408587f958f688c4df5b0bd1ec54e581490d15700312b169900bd5a7e9f3b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cray, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Jan 21 11:16:05 np0005590810 python3.9[154168]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:16:05 np0005590810 systemd[1]: libpod-conmon-4e408587f958f688c4df5b0bd1ec54e581490d15700312b169900bd5a7e9f3b8.scope: Deactivated successfully.
Jan 21 11:16:05 np0005590810 ovs-vsctl[154195]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 21 11:16:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:06 np0005590810 podman[154221]: 2026-01-21 16:16:06.148693678 +0000 UTC m=+0.060613152 container create 3aa73ed17e20dc30ae8da7aa758c82e8691ae1f268b430980a2e2aae80929364 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:16:06 np0005590810 systemd[1]: Started libpod-conmon-3aa73ed17e20dc30ae8da7aa758c82e8691ae1f268b430980a2e2aae80929364.scope.
Jan 21 11:16:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:06 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:16:06 np0005590810 podman[154221]: 2026-01-21 16:16:06.127368128 +0000 UTC m=+0.039287632 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:16:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79e790dfff76203552705cffb8a6f9d2776e7dfb38a86fbe6c74a39cd80565b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79e790dfff76203552705cffb8a6f9d2776e7dfb38a86fbe6c74a39cd80565b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79e790dfff76203552705cffb8a6f9d2776e7dfb38a86fbe6c74a39cd80565b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79e790dfff76203552705cffb8a6f9d2776e7dfb38a86fbe6c74a39cd80565b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:16:06 np0005590810 podman[154221]: 2026-01-21 16:16:06.242627319 +0000 UTC m=+0.154546813 container init 3aa73ed17e20dc30ae8da7aa758c82e8691ae1f268b430980a2e2aae80929364 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_euclid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:16:06 np0005590810 podman[154221]: 2026-01-21 16:16:06.250243252 +0000 UTC m=+0.162162736 container start 3aa73ed17e20dc30ae8da7aa758c82e8691ae1f268b430980a2e2aae80929364 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:16:06 np0005590810 podman[154221]: 2026-01-21 16:16:06.253255458 +0000 UTC m=+0.165174942 container attach 3aa73ed17e20dc30ae8da7aa758c82e8691ae1f268b430980a2e2aae80929364 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:16:06 np0005590810 systemd[1]: session-50.scope: Deactivated successfully.
Jan 21 11:16:06 np0005590810 systemd[1]: session-50.scope: Consumed 1min 59ms CPU time.
Jan 21 11:16:06 np0005590810 systemd-logind[795]: Session 50 logged out. Waiting for processes to exit.
Jan 21 11:16:06 np0005590810 systemd-logind[795]: Removed session 50.
Jan 21 11:16:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:06 np0005590810 lvm[154316]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:16:06 np0005590810 lvm[154316]: VG ceph_vg0 finished
Jan 21 11:16:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:06.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:16:07.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:16:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:16:07.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:16:07 np0005590810 brave_euclid[154241]: {}
Jan 21 11:16:07 np0005590810 systemd[1]: libpod-3aa73ed17e20dc30ae8da7aa758c82e8691ae1f268b430980a2e2aae80929364.scope: Deactivated successfully.
Jan 21 11:16:07 np0005590810 systemd[1]: libpod-3aa73ed17e20dc30ae8da7aa758c82e8691ae1f268b430980a2e2aae80929364.scope: Consumed 1.206s CPU time.
Jan 21 11:16:07 np0005590810 podman[154221]: 2026-01-21 16:16:07.06177327 +0000 UTC m=+0.973692764 container died 3aa73ed17e20dc30ae8da7aa758c82e8691ae1f268b430980a2e2aae80929364 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_euclid, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 11:16:07 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a79e790dfff76203552705cffb8a6f9d2776e7dfb38a86fbe6c74a39cd80565b-merged.mount: Deactivated successfully.
Jan 21 11:16:07 np0005590810 podman[154221]: 2026-01-21 16:16:07.107013571 +0000 UTC m=+1.018933055 container remove 3aa73ed17e20dc30ae8da7aa758c82e8691ae1f268b430980a2e2aae80929364 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:16:07 np0005590810 systemd[1]: libpod-conmon-3aa73ed17e20dc30ae8da7aa758c82e8691ae1f268b430980a2e2aae80929364.scope: Deactivated successfully.
Jan 21 11:16:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:16:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:07.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:16:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:16:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:16:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:16:07 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:16:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:16:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:08 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:08 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:08 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:08.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:09 np0005590810 systemd[1]: Stopping User Manager for UID 0...
Jan 21 11:16:09 np0005590810 systemd[152667]: Activating special unit Exit the Session...
Jan 21 11:16:09 np0005590810 systemd[152667]: Stopped target Main User Target.
Jan 21 11:16:09 np0005590810 systemd[152667]: Stopped target Basic System.
Jan 21 11:16:09 np0005590810 systemd[152667]: Stopped target Paths.
Jan 21 11:16:09 np0005590810 systemd[152667]: Stopped target Sockets.
Jan 21 11:16:09 np0005590810 systemd[152667]: Stopped target Timers.
Jan 21 11:16:09 np0005590810 systemd[152667]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 21 11:16:09 np0005590810 systemd[152667]: Closed D-Bus User Message Bus Socket.
Jan 21 11:16:09 np0005590810 systemd[152667]: Stopped Create User's Volatile Files and Directories.
Jan 21 11:16:09 np0005590810 systemd[152667]: Removed slice User Application Slice.
Jan 21 11:16:09 np0005590810 systemd[152667]: Reached target Shutdown.
Jan 21 11:16:09 np0005590810 systemd[152667]: Finished Exit the Session.
Jan 21 11:16:09 np0005590810 systemd[152667]: Reached target Exit the Session.
Jan 21 11:16:09 np0005590810 systemd[1]: user@0.service: Deactivated successfully.
Jan 21 11:16:09 np0005590810 systemd[1]: Stopped User Manager for UID 0.
Jan 21 11:16:09 np0005590810 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 21 11:16:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:09.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:09 np0005590810 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 21 11:16:09 np0005590810 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 21 11:16:09 np0005590810 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 21 11:16:09 np0005590810 systemd[1]: Removed slice User Slice of UID 0.
Jan 21 11:16:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:16:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:16:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:16:09 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:16:09 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:16:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:16:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:16:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:16:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:16:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:16:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:16:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:16:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:10 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:10 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:10 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:10.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:11.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:16:12 np0005590810 systemd-logind[795]: New session 52 of user zuul.
Jan 21 11:16:12 np0005590810 systemd[1]: Started Session 52 of User zuul.
Jan 21 11:16:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:12 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:12 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa294002ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:12 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:12.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:13 np0005590810 python3.9[154517]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:16:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:13.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:16:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:14 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:16:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:14 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:14 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa294002ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:14.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:15.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:16:15 np0005590810 python3.9[154676]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:16:15] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 21 11:16:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:16:15] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 21 11:16:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:16 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:16 np0005590810 python3.9[154828]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:16 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:16 np0005590810 python3.9[155005]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:16 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:16.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:16:17.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:16:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:16:17.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:16:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:17.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:16:17 np0005590810 python3.9[155160]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:18 np0005590810 python3.9[155312]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa294002ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:18 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:18.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:19 np0005590810 python3.9[155463]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:16:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161619 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:16:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:19.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:16:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:16:19 np0005590810 python3.9[155616]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 21 11:16:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:20 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:20 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa294002ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:20 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:20.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:21.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:21 np0005590810 python3.9[155768]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:16:21 np0005590810 python3.9[155889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012180.7725203-213-58508733206634/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:22 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:22 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:22 np0005590810 python3.9[156039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:22 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:22.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:23.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:23 np0005590810 python3.9[156161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012182.1677296-258-88065421564781/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:16:24 np0005590810 python3.9[156314]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 11:16:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:24 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:16:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:24 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:16:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:16:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:24 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:24.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:25 np0005590810 python3.9[156399]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:16:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:25.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:16:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:16:25] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 21 11:16:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:16:25] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 21 11:16:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:26 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:26 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c003750 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:26 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:26.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:16:27.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:16:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:27.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:16:27 np0005590810 python3.9[156555]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 11:16:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:28 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:28 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:28 np0005590810 python3.9[156708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:28 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:16:28 np0005590810 ovn_controller[152632]: 2026-01-21T16:16:28Z|00025|memory|INFO|15872 kB peak resident set size after 29.8 seconds
Jan 21 11:16:28 np0005590810 ovn_controller[152632]: 2026-01-21T16:16:28Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Jan 21 11:16:28 np0005590810 podman[156805]: 2026-01-21 16:16:28.73751266 +0000 UTC m=+0.105566733 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 21 11:16:28 np0005590810 python3.9[156843]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012187.8864956-369-272073307601324/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:28 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2900014a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:28.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:29.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:16:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:16:29 np0005590810 python3.9[157010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:30 np0005590810 python3.9[157131]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012189.0526066-369-18546417204858/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:30 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:30 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:30 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:30.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:31.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:31 np0005590810 python3.9[157283]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:16:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:31 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:16:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:31 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:16:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:31 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:16:31 np0005590810 python3.9[157404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012190.8665552-501-209382433911368/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:32 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2900014a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:32 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:32 np0005590810 python3.9[157554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:32 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:32.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:33 np0005590810 python3.9[157676]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012192.0052614-501-277050828773437/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:33.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:16:33 np0005590810 python3.9[157827]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:16:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:34 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:16:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:34 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2900014a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:34 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:16:34 np0005590810 python3.9[157981]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:34 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:16:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:34.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:16:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:35.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:16:35 np0005590810 python3.9[158135]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:16:35] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 21 11:16:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:16:35] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 21 11:16:36 np0005590810 python3.9[158213]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:36 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:36 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:36 np0005590810 python3.9[158365]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:36 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2900014a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:16:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:36.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:16:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:16:37.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:16:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:16:37.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:16:37 np0005590810 python3.9[158469]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:37.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:16:37 np0005590810 python3.9[158622]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:16:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:38 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:38 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:38 np0005590810 python3.9[158774]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:38 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:38.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:16:39
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'volumes', 'vms', 'images', '.nfs', 'default.rgw.log', 'backups', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr']
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:16:39 np0005590810 python3.9[158853]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:16:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:16:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:16:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:16:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:16:39 np0005590810 python3.9[159006]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:40 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2900014a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:40 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:40 np0005590810 python3.9[159084]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:16:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:40 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:16:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:41.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:16:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161641 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:16:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:41.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:41 np0005590810 python3.9[159237]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:16:41 np0005590810 systemd[1]: Reloading.
Jan 21 11:16:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:16:41 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:16:41 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:16:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:42 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:42 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:42 np0005590810 python3.9[159426]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:42 np0005590810 python3.9[159504]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:16:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:42 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:43.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:43.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Jan 21 11:16:43 np0005590810 python3.9[159658]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:44 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:16:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:44 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:44 np0005590810 python3.9[159736]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:16:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:44 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:45.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:45 np0005590810 python3.9[159888]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:16:45 np0005590810 systemd[1]: Reloading.
Jan 21 11:16:45 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:16:45 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:16:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:45 np0005590810 systemd[1]: Starting Create netns directory...
Jan 21 11:16:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Jan 21 11:16:45 np0005590810 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 11:16:45 np0005590810 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 11:16:45 np0005590810 systemd[1]: Finished Create netns directory.
Jan 21 11:16:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:16:45] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 21 11:16:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:16:45] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 21 11:16:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:46 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:46 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:46 np0005590810 python3.9[160083]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:46 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:16:47.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:16:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:47.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:47.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:16:47 np0005590810 python3.9[160237]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:48 np0005590810 python3.9[160360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012206.8970912-954-38008449345844/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:48 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2900014e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:48 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:48 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:49.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:49 np0005590810 python3.9[160513]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:16:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:16:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:49.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:16:49 np0005590810 python3.9[160666]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:16:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:50 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:50 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290001680 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:50 np0005590810 python3.9[160818]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:16:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:50 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:51.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:51 np0005590810 python3.9[160942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012210.0243535-1053-225345639091810/.source.json _original_basename=._plpjyi3 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:16:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:51.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:16:51 np0005590810 python3.9[161093]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:16:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:52 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:52 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:52 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa290001680 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:53.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:53.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:16:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:54 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:16:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:54 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:16:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:16:54 np0005590810 python3.9[161518]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 21 11:16:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:54 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:16:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:55.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:16:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:55.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:16:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:16:55] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 21 11:16:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:16:55] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 21 11:16:55 np0005590810 python3.9[161672]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 11:16:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:56 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:56 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:56 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:16:57.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:16:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:57.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:57 np0005590810 python3[161850]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 11:16:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:16:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:57.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:16:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:16:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:58 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:58 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2740045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:58 np0005590810 radosgw[94128]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 21 11:16:58 np0005590810 radosgw[94128]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 21 11:16:58 np0005590810 radosgw[94128]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 21 11:16:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:16:58 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2940043e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:16:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:16:59.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:16:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:16:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:16:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:16:59.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:16:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:17:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:00 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa278003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:00 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2900016c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:00 np0005590810 podman[161916]: 2026-01-21 16:17:00.742214568 +0000 UTC m=+1.113674627 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:17:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:00 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:01.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:01.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 70 op/s
Jan 21 11:17:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:02 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:02 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:02 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa2900016e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:03.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:17:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:03.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:17:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 70 op/s
Jan 21 11:17:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:04 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:17:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:04 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:04 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:05.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:05.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 0 B/s wr, 108 op/s
Jan 21 11:17:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:17:05] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 21 11:17:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:17:05] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 21 11:17:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa298003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:06 np0005590810 podman[161864]: 2026-01-21 16:17:06.65390299 +0000 UTC m=+9.484410215 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 11:17:06 np0005590810 podman[162030]: 2026-01-21 16:17:06.768953581 +0000 UTC m=+0.023636046 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 11:17:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:06 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa280001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:17:07.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:17:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:07.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:07.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 0 B/s wr, 108 op/s
Jan 21 11:17:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[136399]: 21/01/2026 16:17:08 : epoch 6970fb4e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa27c001d50 fd 48 proxy ignored for local
Jan 21 11:17:08 np0005590810 kernel: ganesha.nfsd[161928]: segfault at 50 ip 00007fa32521132e sp 00007fa29d7f9210 error 4 in libntirpc.so.5.8[7fa3251f6000+2c000] likely on CPU 3 (core 0, socket 3)
Jan 21 11:17:08 np0005590810 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 21 11:17:08 np0005590810 systemd[1]: Started Process Core Dump (PID 162108/UID 0).
Jan 21 11:17:08 np0005590810 podman[162030]: 2026-01-21 16:17:08.402490073 +0000 UTC m=+1.657172518 container create 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 21 11:17:08 np0005590810 python3[161850]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 11:17:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:17:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:09.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:17:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:17:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:17:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:09.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:17:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:17:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:17:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:17:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:17:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 0 B/s wr, 108 op/s
Jan 21 11:17:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:17:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:17:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:17:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:17:10 np0005590810 systemd-coredump[162109]: Process 136420 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 62:#012#0  0x00007fa32521132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Jan 21 11:17:10 np0005590810 systemd[1]: systemd-coredump@6-162108-0.service: Deactivated successfully.
Jan 21 11:17:10 np0005590810 systemd[1]: systemd-coredump@6-162108-0.service: Consumed 2.149s CPU time.
Jan 21 11:17:10 np0005590810 podman[162184]: 2026-01-21 16:17:10.498678065 +0000 UTC m=+0.029890504 container died 8b47c35bea0f357653679afd5a66bed95cac7d3dc8560753afa8b6935c6b89ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 21 11:17:10 np0005590810 systemd[1]: var-lib-containers-storage-overlay-0e3122259f2a7707b5308e0750198f6798db60e09e083a84af7a48e6325d03cb-merged.mount: Deactivated successfully.
Jan 21 11:17:10 np0005590810 podman[162184]: 2026-01-21 16:17:10.566254368 +0000 UTC m=+0.097466777 container remove 8b47c35bea0f357653679afd5a66bed95cac7d3dc8560753afa8b6935c6b89ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 21 11:17:10 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Main process exited, code=exited, status=139/n/a
Jan 21 11:17:10 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Failed with result 'exit-code'.
Jan 21 11:17:10 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.738s CPU time.
Jan 21 11:17:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:11.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:17:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:11.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:17:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 0 B/s wr, 122 op/s
Jan 21 11:17:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161711 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:17:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:17:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:13.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:17:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:13.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:17:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:17:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:15 np0005590810 podman[162344]: 2026-01-21 16:17:15.095324857 +0000 UTC m=+0.041025326 container create 1c898701ba45a7427461e16859913052dd6f7361ff89cbd972a687abb5f3c4b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_montalcini, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:17:15 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:17:15 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:17:15 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:17:15 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:17:15 np0005590810 systemd[1]: Started libpod-conmon-1c898701ba45a7427461e16859913052dd6f7361ff89cbd972a687abb5f3c4b4.scope.
Jan 21 11:17:15 np0005590810 podman[162344]: 2026-01-21 16:17:15.079032823 +0000 UTC m=+0.024733322 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:17:15 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:17:15 np0005590810 podman[162344]: 2026-01-21 16:17:15.19996949 +0000 UTC m=+0.145669979 container init 1c898701ba45a7427461e16859913052dd6f7361ff89cbd972a687abb5f3c4b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:17:15 np0005590810 podman[162344]: 2026-01-21 16:17:15.20852456 +0000 UTC m=+0.154225029 container start 1c898701ba45a7427461e16859913052dd6f7361ff89cbd972a687abb5f3c4b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 21 11:17:15 np0005590810 podman[162344]: 2026-01-21 16:17:15.212808204 +0000 UTC m=+0.158508693 container attach 1c898701ba45a7427461e16859913052dd6f7361ff89cbd972a687abb5f3c4b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_montalcini, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 21 11:17:15 np0005590810 admiring_montalcini[162396]: 167 167
Jan 21 11:17:15 np0005590810 systemd[1]: libpod-1c898701ba45a7427461e16859913052dd6f7361ff89cbd972a687abb5f3c4b4.scope: Deactivated successfully.
Jan 21 11:17:15 np0005590810 podman[162344]: 2026-01-21 16:17:15.216553113 +0000 UTC m=+0.162253582 container died 1c898701ba45a7427461e16859913052dd6f7361ff89cbd972a687abb5f3c4b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 21 11:17:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:15.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:15 np0005590810 systemd[1]: var-lib-containers-storage-overlay-502f6ce4dba64b4cba8cb883b3f0d97210f76a78541c7d1c6fe195852df47d5f-merged.mount: Deactivated successfully.
Jan 21 11:17:15 np0005590810 podman[162344]: 2026-01-21 16:17:15.260258342 +0000 UTC m=+0.205958811 container remove 1c898701ba45a7427461e16859913052dd6f7361ff89cbd972a687abb5f3c4b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_montalcini, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 21 11:17:15 np0005590810 systemd[1]: libpod-conmon-1c898701ba45a7427461e16859913052dd6f7361ff89cbd972a687abb5f3c4b4.scope: Deactivated successfully.
Jan 21 11:17:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Jan 21 11:17:15 np0005590810 podman[162490]: 2026-01-21 16:17:15.442873475 +0000 UTC m=+0.049055190 container create 11927087e375a22ed91b0aa618ad56ff8031e5c261cb5168dd0e0c3cb0ccf7bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cartwright, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 11:17:15 np0005590810 systemd[1]: Started libpod-conmon-11927087e375a22ed91b0aa618ad56ff8031e5c261cb5168dd0e0c3cb0ccf7bd.scope.
Jan 21 11:17:15 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:17:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb77ead56c6c079095e3797bbaf13fcaf686be2bda9863693e814301b282d49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb77ead56c6c079095e3797bbaf13fcaf686be2bda9863693e814301b282d49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb77ead56c6c079095e3797bbaf13fcaf686be2bda9863693e814301b282d49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb77ead56c6c079095e3797bbaf13fcaf686be2bda9863693e814301b282d49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb77ead56c6c079095e3797bbaf13fcaf686be2bda9863693e814301b282d49/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:15 np0005590810 podman[162490]: 2026-01-21 16:17:15.423405371 +0000 UTC m=+0.029587106 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:17:15 np0005590810 podman[162490]: 2026-01-21 16:17:15.522413306 +0000 UTC m=+0.128595041 container init 11927087e375a22ed91b0aa618ad56ff8031e5c261cb5168dd0e0c3cb0ccf7bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cartwright, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 11:17:15 np0005590810 python3.9[162484]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:17:15 np0005590810 podman[162490]: 2026-01-21 16:17:15.532763422 +0000 UTC m=+0.138945137 container start 11927087e375a22ed91b0aa618ad56ff8031e5c261cb5168dd0e0c3cb0ccf7bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 11:17:15 np0005590810 podman[162490]: 2026-01-21 16:17:15.5365328 +0000 UTC m=+0.142714535 container attach 11927087e375a22ed91b0aa618ad56ff8031e5c261cb5168dd0e0c3cb0ccf7bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:17:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:17:15] "GET /metrics HTTP/1.1" 200 48283 "" "Prometheus/2.51.0"
Jan 21 11:17:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:17:15] "GET /metrics HTTP/1.1" 200 48283 "" "Prometheus/2.51.0"
Jan 21 11:17:15 np0005590810 admiring_cartwright[162507]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:17:15 np0005590810 admiring_cartwright[162507]: --> All data devices are unavailable
Jan 21 11:17:15 np0005590810 systemd[1]: libpod-11927087e375a22ed91b0aa618ad56ff8031e5c261cb5168dd0e0c3cb0ccf7bd.scope: Deactivated successfully.
Jan 21 11:17:15 np0005590810 podman[162490]: 2026-01-21 16:17:15.892514485 +0000 UTC m=+0.498696200 container died 11927087e375a22ed91b0aa618ad56ff8031e5c261cb5168dd0e0c3cb0ccf7bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cartwright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:17:15 np0005590810 systemd[1]: var-lib-containers-storage-overlay-dcb77ead56c6c079095e3797bbaf13fcaf686be2bda9863693e814301b282d49-merged.mount: Deactivated successfully.
Jan 21 11:17:15 np0005590810 podman[162490]: 2026-01-21 16:17:15.935655166 +0000 UTC m=+0.541836871 container remove 11927087e375a22ed91b0aa618ad56ff8031e5c261cb5168dd0e0c3cb0ccf7bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Jan 21 11:17:15 np0005590810 systemd[1]: libpod-conmon-11927087e375a22ed91b0aa618ad56ff8031e5c261cb5168dd0e0c3cb0ccf7bd.scope: Deactivated successfully.
Jan 21 11:17:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161716 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:17:16 np0005590810 python3.9[162736]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:16 np0005590810 podman[162793]: 2026-01-21 16:17:16.507111301 +0000 UTC m=+0.045412925 container create 82d3f9f34ecd427a6c17d03d77fe563982c68978750f58d4b35dddbcd9d1bb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_neumann, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 11:17:16 np0005590810 systemd[1]: Started libpod-conmon-82d3f9f34ecd427a6c17d03d77fe563982c68978750f58d4b35dddbcd9d1bb4a.scope.
Jan 21 11:17:16 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:17:16 np0005590810 podman[162793]: 2026-01-21 16:17:16.489136773 +0000 UTC m=+0.027438417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:17:16 np0005590810 podman[162793]: 2026-01-21 16:17:16.593393933 +0000 UTC m=+0.131695577 container init 82d3f9f34ecd427a6c17d03d77fe563982c68978750f58d4b35dddbcd9d1bb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:17:16 np0005590810 podman[162793]: 2026-01-21 16:17:16.601501429 +0000 UTC m=+0.139803053 container start 82d3f9f34ecd427a6c17d03d77fe563982c68978750f58d4b35dddbcd9d1bb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_neumann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 21 11:17:16 np0005590810 podman[162793]: 2026-01-21 16:17:16.60467533 +0000 UTC m=+0.142976954 container attach 82d3f9f34ecd427a6c17d03d77fe563982c68978750f58d4b35dddbcd9d1bb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_neumann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:17:16 np0005590810 competent_neumann[162839]: 167 167
Jan 21 11:17:16 np0005590810 systemd[1]: libpod-82d3f9f34ecd427a6c17d03d77fe563982c68978750f58d4b35dddbcd9d1bb4a.scope: Deactivated successfully.
Jan 21 11:17:16 np0005590810 podman[162793]: 2026-01-21 16:17:16.607918052 +0000 UTC m=+0.146219676 container died 82d3f9f34ecd427a6c17d03d77fe563982c68978750f58d4b35dddbcd9d1bb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_neumann, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 11:17:16 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d8be334d1fd3cce11f6a5272583edae1e352353bcf3e16dfa75843c4714e3357-merged.mount: Deactivated successfully.
Jan 21 11:17:16 np0005590810 podman[162793]: 2026-01-21 16:17:16.652048755 +0000 UTC m=+0.190350369 container remove 82d3f9f34ecd427a6c17d03d77fe563982c68978750f58d4b35dddbcd9d1bb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 11:17:16 np0005590810 systemd[1]: libpod-conmon-82d3f9f34ecd427a6c17d03d77fe563982c68978750f58d4b35dddbcd9d1bb4a.scope: Deactivated successfully.
Jan 21 11:17:16 np0005590810 podman[162891]: 2026-01-21 16:17:16.843477466 +0000 UTC m=+0.046529910 container create b806377c9b28f25636dd9ef905bb5ddbefc9b1607ad076bd1f1f5f4caccd48d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_neumann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:17:16 np0005590810 python3.9[162882]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:17:16 np0005590810 systemd[1]: Started libpod-conmon-b806377c9b28f25636dd9ef905bb5ddbefc9b1607ad076bd1f1f5f4caccd48d0.scope.
Jan 21 11:17:16 np0005590810 podman[162891]: 2026-01-21 16:17:16.823109753 +0000 UTC m=+0.026162207 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:17:16 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:17:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45e3e2d80f97578c657583be0d8d413086eb920c8ca213d940b5111bb42d63d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45e3e2d80f97578c657583be0d8d413086eb920c8ca213d940b5111bb42d63d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45e3e2d80f97578c657583be0d8d413086eb920c8ca213d940b5111bb42d63d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:16 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45e3e2d80f97578c657583be0d8d413086eb920c8ca213d940b5111bb42d63d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:16 np0005590810 podman[162891]: 2026-01-21 16:17:16.947944922 +0000 UTC m=+0.150997376 container init b806377c9b28f25636dd9ef905bb5ddbefc9b1607ad076bd1f1f5f4caccd48d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_neumann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 11:17:16 np0005590810 podman[162891]: 2026-01-21 16:17:16.958948029 +0000 UTC m=+0.162000463 container start b806377c9b28f25636dd9ef905bb5ddbefc9b1607ad076bd1f1f5f4caccd48d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:17:16 np0005590810 podman[162891]: 2026-01-21 16:17:16.962563344 +0000 UTC m=+0.165615778 container attach b806377c9b28f25636dd9ef905bb5ddbefc9b1607ad076bd1f1f5f4caccd48d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_neumann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 21 11:17:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:17:17.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:17:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:17:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:17.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:17:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:17.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]: {
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:    "0": [
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:        {
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            "devices": [
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "/dev/loop3"
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            ],
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            "lv_name": "ceph_lv0",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            "lv_size": "21470642176",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            "name": "ceph_lv0",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            "tags": {
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.cluster_name": "ceph",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.crush_device_class": "",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.encrypted": "0",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.osd_id": "0",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.type": "block",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.vdo": "0",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:                "ceph.with_tpm": "0"
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            },
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            "type": "block",
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:            "vg_name": "ceph_vg0"
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:        }
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]:    ]
Jan 21 11:17:17 np0005590810 nifty_neumann[162933]: }
Jan 21 11:17:17 np0005590810 systemd[1]: libpod-b806377c9b28f25636dd9ef905bb5ddbefc9b1607ad076bd1f1f5f4caccd48d0.scope: Deactivated successfully.
Jan 21 11:17:17 np0005590810 podman[162891]: 2026-01-21 16:17:17.281562271 +0000 UTC m=+0.484614715 container died b806377c9b28f25636dd9ef905bb5ddbefc9b1607ad076bd1f1f5f4caccd48d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:17:17 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a45e3e2d80f97578c657583be0d8d413086eb920c8ca213d940b5111bb42d63d-merged.mount: Deactivated successfully.
Jan 21 11:17:17 np0005590810 podman[162891]: 2026-01-21 16:17:17.329291427 +0000 UTC m=+0.532343861 container remove b806377c9b28f25636dd9ef905bb5ddbefc9b1607ad076bd1f1f5f4caccd48d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:17:17 np0005590810 systemd[1]: libpod-conmon-b806377c9b28f25636dd9ef905bb5ddbefc9b1607ad076bd1f1f5f4caccd48d0.scope: Deactivated successfully.
Jan 21 11:17:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Jan 21 11:17:17 np0005590810 python3.9[163125]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769012236.9343014-1287-24168942769865/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:17 np0005590810 podman[163219]: 2026-01-21 16:17:17.866203371 +0000 UTC m=+0.049521664 container create 4bcb1973a1bec28d21acd95b33b5d7c941adcce2b8b494a03c7b69d3c2d1c507 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:17:17 np0005590810 systemd[1]: Started libpod-conmon-4bcb1973a1bec28d21acd95b33b5d7c941adcce2b8b494a03c7b69d3c2d1c507.scope.
Jan 21 11:17:17 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:17:17 np0005590810 podman[163219]: 2026-01-21 16:17:17.844975951 +0000 UTC m=+0.028294264 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:17:17 np0005590810 podman[163219]: 2026-01-21 16:17:17.94886212 +0000 UTC m=+0.132180433 container init 4bcb1973a1bec28d21acd95b33b5d7c941adcce2b8b494a03c7b69d3c2d1c507 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_mclaren, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:17:17 np0005590810 podman[163219]: 2026-01-21 16:17:17.957060219 +0000 UTC m=+0.140378512 container start 4bcb1973a1bec28d21acd95b33b5d7c941adcce2b8b494a03c7b69d3c2d1c507 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 11:17:17 np0005590810 podman[163219]: 2026-01-21 16:17:17.961570351 +0000 UTC m=+0.144888644 container attach 4bcb1973a1bec28d21acd95b33b5d7c941adcce2b8b494a03c7b69d3c2d1c507 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_mclaren, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:17:17 np0005590810 stoic_mclaren[163259]: 167 167
Jan 21 11:17:17 np0005590810 systemd[1]: libpod-4bcb1973a1bec28d21acd95b33b5d7c941adcce2b8b494a03c7b69d3c2d1c507.scope: Deactivated successfully.
Jan 21 11:17:17 np0005590810 podman[163219]: 2026-01-21 16:17:17.966489536 +0000 UTC m=+0.149807859 container died 4bcb1973a1bec28d21acd95b33b5d7c941adcce2b8b494a03c7b69d3c2d1c507 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:17:17 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c67282d811fc1fc170b252f8eb6c88e5fb206211c634d2a3afbe649ed197804e-merged.mount: Deactivated successfully.
Jan 21 11:17:18 np0005590810 podman[163219]: 2026-01-21 16:17:18.032186649 +0000 UTC m=+0.215504942 container remove 4bcb1973a1bec28d21acd95b33b5d7c941adcce2b8b494a03c7b69d3c2d1c507 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 11:17:18 np0005590810 systemd[1]: libpod-conmon-4bcb1973a1bec28d21acd95b33b5d7c941adcce2b8b494a03c7b69d3c2d1c507.scope: Deactivated successfully.
Jan 21 11:17:18 np0005590810 podman[163312]: 2026-01-21 16:17:18.181818272 +0000 UTC m=+0.040071906 container create 77a3930fff270a36fc5295280f5c9decdca48ca23f0e7996adf6c6b145322bdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 21 11:17:18 np0005590810 systemd[1]: Started libpod-conmon-77a3930fff270a36fc5295280f5c9decdca48ca23f0e7996adf6c6b145322bdf.scope.
Jan 21 11:17:18 np0005590810 podman[163312]: 2026-01-21 16:17:18.166178628 +0000 UTC m=+0.024432282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:17:18 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:17:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b687c458628ce8436b1804c984e1028479b0f5b2e0660663fa5fef9e8547e883/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b687c458628ce8436b1804c984e1028479b0f5b2e0660663fa5fef9e8547e883/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b687c458628ce8436b1804c984e1028479b0f5b2e0660663fa5fef9e8547e883/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b687c458628ce8436b1804c984e1028479b0f5b2e0660663fa5fef9e8547e883/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:18 np0005590810 podman[163312]: 2026-01-21 16:17:18.28413565 +0000 UTC m=+0.142389314 container init 77a3930fff270a36fc5295280f5c9decdca48ca23f0e7996adf6c6b145322bdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wilson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:17:18 np0005590810 podman[163312]: 2026-01-21 16:17:18.293275239 +0000 UTC m=+0.151528873 container start 77a3930fff270a36fc5295280f5c9decdca48ca23f0e7996adf6c6b145322bdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 21 11:17:18 np0005590810 podman[163312]: 2026-01-21 16:17:18.297728939 +0000 UTC m=+0.155982603 container attach 77a3930fff270a36fc5295280f5c9decdca48ca23f0e7996adf6c6b145322bdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:17:18 np0005590810 python3.9[163306]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 11:17:18 np0005590810 systemd[1]: Reloading.
Jan 21 11:17:18 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:17:18 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:17:18 np0005590810 lvm[163486]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:17:18 np0005590810 lvm[163486]: VG ceph_vg0 finished
Jan 21 11:17:18 np0005590810 gracious_wilson[163329]: {}
Jan 21 11:17:18 np0005590810 systemd[1]: libpod-77a3930fff270a36fc5295280f5c9decdca48ca23f0e7996adf6c6b145322bdf.scope: Deactivated successfully.
Jan 21 11:17:18 np0005590810 systemd[1]: libpod-77a3930fff270a36fc5295280f5c9decdca48ca23f0e7996adf6c6b145322bdf.scope: Consumed 1.062s CPU time.
Jan 21 11:17:18 np0005590810 podman[163312]: 2026-01-21 16:17:18.995525201 +0000 UTC m=+0.853778835 container died 77a3930fff270a36fc5295280f5c9decdca48ca23f0e7996adf6c6b145322bdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:17:19 np0005590810 systemd[1]: var-lib-containers-storage-overlay-b687c458628ce8436b1804c984e1028479b0f5b2e0660663fa5fef9e8547e883-merged.mount: Deactivated successfully.
Jan 21 11:17:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:19.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:19 np0005590810 podman[163312]: 2026-01-21 16:17:19.05288184 +0000 UTC m=+0.911135474 container remove 77a3930fff270a36fc5295280f5c9decdca48ca23f0e7996adf6c6b145322bdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wilson, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:17:19 np0005590810 systemd[1]: libpod-conmon-77a3930fff270a36fc5295280f5c9decdca48ca23f0e7996adf6c6b145322bdf.scope: Deactivated successfully.
Jan 21 11:17:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:17:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:17:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:19.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:19 np0005590810 python3.9[163517]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:17:19 np0005590810 systemd[1]: Reloading.
Jan 21 11:17:19 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:17:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Jan 21 11:17:19 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:17:19 np0005590810 systemd[1]: Starting ovn_metadata_agent container...
Jan 21 11:17:19 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:17:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35be7dc3aac93b46dddac4dbf2e8763c495289d04ce468e7061aecdb8db945c8/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35be7dc3aac93b46dddac4dbf2e8763c495289d04ce468e7061aecdb8db945c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:19 np0005590810 systemd[1]: Started /usr/bin/podman healthcheck run 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef.
Jan 21 11:17:19 np0005590810 podman[163573]: 2026-01-21 16:17:19.820797445 +0000 UTC m=+0.134066212 container init 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: + sudo -E kolla_set_configs
Jan 21 11:17:19 np0005590810 podman[163573]: 2026-01-21 16:17:19.84692947 +0000 UTC m=+0.160198247 container start 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 21 11:17:19 np0005590810 edpm-start-podman-container[163573]: ovn_metadata_agent
Jan 21 11:17:19 np0005590810 podman[163595]: 2026-01-21 16:17:19.921765431 +0000 UTC m=+0.059654664 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 11:17:19 np0005590810 edpm-start-podman-container[163572]: Creating additional drop-in dependency for "ovn_metadata_agent" (2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef)
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Validating config file
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Copying service configuration files
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Writing out command to execute
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 21 11:17:19 np0005590810 systemd[1]: Reloading.
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: ++ cat /run_command
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: + CMD=neutron-ovn-metadata-agent
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: + ARGS=
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: + sudo kolla_copy_cacerts
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: + [[ ! -n '' ]]
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: + . kolla_extend_start
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: Running command: 'neutron-ovn-metadata-agent'
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: + umask 0022
Jan 21 11:17:19 np0005590810 ovn_metadata_agent[163588]: + exec neutron-ovn-metadata-agent
Jan 21 11:17:20 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:17:20 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:17:20 np0005590810 systemd[1]: Started ovn_metadata_agent container.
Jan 21 11:17:20 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Scheduled restart job, restart counter is at 7.
Jan 21 11:17:20 np0005590810 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:17:20 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.738s CPU time.
Jan 21 11:17:20 np0005590810 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:17:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:21.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:21 np0005590810 podman[163755]: 2026-01-21 16:17:21.058124713 +0000 UTC m=+0.044132674 container create 70f7aa716e185736961e1bd7d3a67b35aa5899fc3b90af366d180aced2926f5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:17:21 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa7f1508d40ef1f005fc53357ac3987cf56feb2c1983fb43ede6c8a84491d44e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:21 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa7f1508d40ef1f005fc53357ac3987cf56feb2c1983fb43ede6c8a84491d44e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:21 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa7f1508d40ef1f005fc53357ac3987cf56feb2c1983fb43ede6c8a84491d44e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:21 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa7f1508d40ef1f005fc53357ac3987cf56feb2c1983fb43ede6c8a84491d44e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbatwb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:17:21 np0005590810 podman[163755]: 2026-01-21 16:17:21.034687724 +0000 UTC m=+0.020695705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:17:21 np0005590810 podman[163755]: 2026-01-21 16:17:21.130267069 +0000 UTC m=+0.116275050 container init 70f7aa716e185736961e1bd7d3a67b35aa5899fc3b90af366d180aced2926f5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:17:21 np0005590810 podman[163755]: 2026-01-21 16:17:21.135951319 +0000 UTC m=+0.121959280 container start 70f7aa716e185736961e1bd7d3a67b35aa5899fc3b90af366d180aced2926f5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Jan 21 11:17:21 np0005590810 bash[163755]: 70f7aa716e185736961e1bd7d3a67b35aa5899fc3b90af366d180aced2926f5e
Jan 21 11:17:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:21 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 21 11:17:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:21 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 21 11:17:21 np0005590810 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:17:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:21 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 21 11:17:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:21 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 21 11:17:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:21 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 21 11:17:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:21 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 21 11:17:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:21 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 21 11:17:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:21 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:17:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:21.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:17:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:17:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Jan 21 11:17:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.953 163593 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.954 163593 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.954 163593 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.955 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.955 163593 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.955 163593 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.955 163593 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.955 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.955 163593 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.955 163593 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.955 163593 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.956 163593 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.956 163593 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.956 163593 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.956 163593 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.956 163593 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.956 163593 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.956 163593 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.956 163593 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.956 163593 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.957 163593 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.957 163593 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.957 163593 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.957 163593 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.957 163593 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.957 163593 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.957 163593 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.957 163593 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.958 163593 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.958 163593 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.958 163593 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.958 163593 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.958 163593 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.958 163593 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.958 163593 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.958 163593 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.959 163593 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.959 163593 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.959 163593 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.959 163593 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.959 163593 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.959 163593 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.959 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.960 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.960 163593 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.960 163593 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.960 163593 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.960 163593 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.960 163593 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.960 163593 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.960 163593 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.960 163593 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.961 163593 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.961 163593 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.961 163593 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.961 163593 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.961 163593 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.961 163593 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.961 163593 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.961 163593 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.961 163593 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.962 163593 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.962 163593 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.962 163593 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.962 163593 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.962 163593 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.962 163593 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.962 163593 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.962 163593 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.962 163593 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.963 163593 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.963 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.963 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.963 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.963 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.963 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.963 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.963 163593 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.964 163593 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.964 163593 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.964 163593 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.964 163593 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.964 163593 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.964 163593 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.964 163593 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.964 163593 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.964 163593 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.965 163593 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.965 163593 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.965 163593 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.965 163593 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.965 163593 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.965 163593 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.965 163593 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.965 163593 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.966 163593 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.966 163593 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.966 163593 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.966 163593 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.966 163593 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.966 163593 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.966 163593 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.966 163593 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.966 163593 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.967 163593 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.967 163593 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.967 163593 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.967 163593 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.967 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.967 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.967 163593 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.967 163593 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.968 163593 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.968 163593 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.968 163593 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.968 163593 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.968 163593 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.968 163593 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.968 163593 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.968 163593 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.969 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.969 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.969 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.969 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.969 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.969 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.969 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.969 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.969 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.970 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.970 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.970 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.970 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.970 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.970 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.970 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.970 163593 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.970 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.971 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.971 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.971 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.971 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.971 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.971 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.971 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.971 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.971 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.972 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.972 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.972 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.972 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.972 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.972 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.972 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.972 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.972 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.973 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.973 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.973 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.973 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.973 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.973 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.973 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.973 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.973 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.974 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.974 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.974 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.974 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.974 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.974 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.974 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.974 163593 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.974 163593 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.975 163593 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.975 163593 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.975 163593 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.975 163593 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.975 163593 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.975 163593 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.975 163593 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.975 163593 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.975 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.975 163593 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.976 163593 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.976 163593 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.976 163593 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.976 163593 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.976 163593 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.976 163593 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.976 163593 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.976 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.977 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.977 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.977 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.977 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.977 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.977 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.977 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.977 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.978 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.978 163593 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.978 163593 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.978 163593 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.978 163593 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.978 163593 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.978 163593 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.979 163593 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.979 163593 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.979 163593 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.979 163593 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.979 163593 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.979 163593 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.979 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.979 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.980 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.980 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.980 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.980 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.980 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.980 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.980 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.980 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.981 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.981 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.981 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.981 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.981 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.981 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.981 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.982 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.982 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.982 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.982 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.982 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.982 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.982 163593 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.983 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.983 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.983 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.983 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.983 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.983 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.983 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.984 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.984 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.984 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.984 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.984 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.984 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.985 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.985 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.985 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.985 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.985 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.985 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.985 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.986 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.986 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.986 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.986 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.986 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.986 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.986 163593 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.987 163593 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.987 163593 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.987 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.987 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.987 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.987 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.987 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.988 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.988 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.988 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.988 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.988 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.988 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.988 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.989 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.989 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.989 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.989 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.989 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.989 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.989 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.990 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.990 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.990 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.990 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.990 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.990 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.990 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.991 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.991 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.991 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.991 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.991 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.991 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.991 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.992 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.992 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.992 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.992 163593 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:17:21 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:21.992 163593 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.003 163593 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.003 163593 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.003 163593 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.003 163593 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.004 163593 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.017 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name f6e8413f-2ba2-49cb-8bd6-36b8085ce01c (UUID: f6e8413f-2ba2-49cb-8bd6-36b8085ce01c) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.037 163593 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.038 163593 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.038 163593 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.038 163593 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.041 163593 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.048 163593 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.053 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'f6e8413f-2ba2-49cb-8bd6-36b8085ce01c'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], external_ids={}, name=f6e8413f-2ba2-49cb-8bd6-36b8085ce01c, nb_cfg_timestamp=1769012166897, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.054 163593 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f61aaf75f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.054 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.055 163593 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.055 163593 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.055 163593 INFO oslo_service.service [-] Starting 1 workers
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.059 163593 DEBUG oslo_service.service [-] Started child 163839 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.062 163839 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-2000092'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.062 163593 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp1daydb0d/privsep.sock']
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.084 163839 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.084 163839 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.085 163839 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.088 163839 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.095 163839 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.102 163839 INFO eventlet.wsgi.server [-] (163839) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 21 11:17:22 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:17:22 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:17:22 np0005590810 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.740 163593 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.741 163593 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1daydb0d/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.606 163844 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.610 163844 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.612 163844 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.612 163844 INFO oslo.privsep.daemon [-] privsep daemon running as pid 163844#033[00m
Jan 21 11:17:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:22.743 163844 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8e6fdb-7f9d-4b0e-8862-5ff4064ca804]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:17:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:23.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161723 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:17:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [NOTICE] 020/161723 (4) : haproxy version is 2.3.17-d1c9119
Jan 21 11:17:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [NOTICE] 020/161723 (4) : path to executable is /usr/local/sbin/haproxy
Jan 21 11:17:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [ALERT] 020/161723 (4) : backend 'backend' has no server available!
Jan 21 11:17:23 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:23.232 163844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:17:23 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:23.232 163844 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:17:23 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:23.232 163844 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:17:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:23.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Jan 21 11:17:23 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:23.793 163844 DEBUG oslo.privsep.daemon [-] privsep: reply[46b9e1eb-0742-4d96-9f53-995fdf7ec46f]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:17:23 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:23.796 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=f6e8413f-2ba2-49cb-8bd6-36b8085ce01c, column=external_ids, values=({'neutron:ovn-metadata-id': 'f8380ddd-3cb3-59e0-ba55-896b8e1df472'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:17:23 np0005590810 python3.9[163976]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 21 11:17:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:17:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:17:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.291 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f6e8413f-2ba2-49cb-8bd6-36b8085ce01c, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.319 163593 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.319 163593 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.319 163593 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.319 163593 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.319 163593 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.319 163593 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.319 163593 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.319 163593 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.320 163593 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.320 163593 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.320 163593 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.320 163593 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.320 163593 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.320 163593 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.321 163593 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.321 163593 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.321 163593 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.321 163593 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.321 163593 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.321 163593 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.322 163593 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.322 163593 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.322 163593 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.322 163593 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.322 163593 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.322 163593 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.322 163593 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.323 163593 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.323 163593 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.323 163593 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.323 163593 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.323 163593 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.323 163593 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.324 163593 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.324 163593 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.324 163593 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.324 163593 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.324 163593 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.324 163593 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.325 163593 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.325 163593 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.325 163593 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.325 163593 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.325 163593 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.325 163593 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.325 163593 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.326 163593 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.326 163593 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.326 163593 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.326 163593 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.326 163593 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.326 163593 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.326 163593 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.326 163593 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.327 163593 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.327 163593 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.327 163593 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.327 163593 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.327 163593 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.327 163593 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.327 163593 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.327 163593 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.328 163593 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.328 163593 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.328 163593 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.328 163593 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.328 163593 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.328 163593 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.328 163593 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.329 163593 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.329 163593 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.329 163593 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.329 163593 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.329 163593 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.329 163593 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.329 163593 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.330 163593 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.330 163593 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.330 163593 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.330 163593 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.330 163593 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.330 163593 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.331 163593 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.331 163593 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.331 163593 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.331 163593 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.331 163593 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.331 163593 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.332 163593 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.332 163593 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.332 163593 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.332 163593 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.332 163593 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.332 163593 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.332 163593 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.332 163593 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.332 163593 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.333 163593 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.333 163593 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.333 163593 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.333 163593 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.333 163593 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.333 163593 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.333 163593 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.333 163593 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.334 163593 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.334 163593 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.334 163593 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.334 163593 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.334 163593 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.334 163593 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.334 163593 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.335 163593 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.335 163593 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.335 163593 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.335 163593 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.335 163593 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.335 163593 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.335 163593 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.335 163593 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.336 163593 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.336 163593 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.336 163593 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.336 163593 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.336 163593 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.336 163593 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.336 163593 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.337 163593 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.337 163593 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.337 163593 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.337 163593 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.337 163593 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.337 163593 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.337 163593 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.337 163593 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.338 163593 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.338 163593 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.338 163593 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.338 163593 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.338 163593 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.338 163593 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.338 163593 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.338 163593 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.339 163593 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.339 163593 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.339 163593 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.339 163593 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.339 163593 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.339 163593 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.339 163593 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.340 163593 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.340 163593 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.340 163593 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.340 163593 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.340 163593 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.340 163593 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.340 163593 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.340 163593 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.341 163593 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.341 163593 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.341 163593 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.341 163593 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.341 163593 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.341 163593 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.341 163593 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.341 163593 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.342 163593 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.342 163593 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.342 163593 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.342 163593 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.342 163593 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.342 163593 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.342 163593 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.342 163593 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.343 163593 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.343 163593 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.343 163593 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.343 163593 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.343 163593 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.343 163593 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.343 163593 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.343 163593 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.344 163593 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.344 163593 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.344 163593 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.344 163593 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.344 163593 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.344 163593 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.344 163593 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.345 163593 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.345 163593 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.345 163593 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.345 163593 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.345 163593 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.345 163593 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.345 163593 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.345 163593 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.346 163593 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.346 163593 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.346 163593 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.346 163593 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.346 163593 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.346 163593 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.346 163593 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.346 163593 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.346 163593 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.346 163593 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.347 163593 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.347 163593 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.347 163593 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.347 163593 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.347 163593 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.347 163593 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.347 163593 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.347 163593 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.347 163593 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.348 163593 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.348 163593 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.348 163593 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.348 163593 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.348 163593 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.348 163593 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.348 163593 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.348 163593 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.348 163593 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.348 163593 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.349 163593 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.349 163593 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.349 163593 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.349 163593 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.349 163593 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.349 163593 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.349 163593 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.349 163593 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.349 163593 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.350 163593 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.350 163593 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.350 163593 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.350 163593 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.350 163593 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.350 163593 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.350 163593 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.350 163593 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.351 163593 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.351 163593 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.351 163593 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.351 163593 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.351 163593 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.351 163593 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.351 163593 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.351 163593 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.351 163593 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.351 163593 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.352 163593 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.352 163593 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.352 163593 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.352 163593 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.352 163593 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.352 163593 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.352 163593 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.352 163593 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.353 163593 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.353 163593 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.353 163593 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.353 163593 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.353 163593 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.353 163593 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.353 163593 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.353 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.354 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.354 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.354 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.354 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.354 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.354 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.354 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.355 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.355 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.355 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.355 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.355 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.355 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.355 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.356 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.356 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.356 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.356 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.356 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.356 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.356 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.357 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.357 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.357 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.357 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.357 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.357 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.357 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.358 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.358 163593 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.358 163593 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.358 163593 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.358 163593 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.358 163593 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:17:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:17:24.358 163593 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 21 11:17:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:25.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:25 np0005590810 python3.9[164130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:17:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:25.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 255 B/s wr, 5 op/s
Jan 21 11:17:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:17:25] "GET /metrics HTTP/1.1" 200 48203 "" "Prometheus/2.51.0"
Jan 21 11:17:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:17:25] "GET /metrics HTTP/1.1" 200 48203 "" "Prometheus/2.51.0"
Jan 21 11:17:25 np0005590810 python3.9[164256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012244.6056876-1422-158135696897925/.source.yaml _original_basename=.p__h6yix follow=False checksum=7107bf55e9e93c71f770b94e86216eb124936f4b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:26 np0005590810 systemd[1]: session-52.scope: Deactivated successfully.
Jan 21 11:17:26 np0005590810 systemd[1]: session-52.scope: Consumed 59.432s CPU time.
Jan 21 11:17:26 np0005590810 systemd-logind[795]: Session 52 logged out. Waiting for processes to exit.
Jan 21 11:17:26 np0005590810 systemd-logind[795]: Removed session 52.
Jan 21 11:17:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:17:27.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:17:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:17:27.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:17:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:27.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:27 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:17:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:27 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:17:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:27 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 21 11:17:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:27.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 21 11:17:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:28 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:17:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:28 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:17:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:28 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:17:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:28 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 21 11:17:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:29.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:29 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:17:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:29 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:17:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:29 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:17:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:17:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:29.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 21 11:17:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:31.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:31 np0005590810 systemd-logind[795]: New session 53 of user zuul.
Jan 21 11:17:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:17:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:31.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:17:31 np0005590810 systemd[1]: Started Session 53 of User zuul.
Jan 21 11:17:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 597 B/s wr, 2 op/s
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=0
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1940016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:32 np0005590810 python3.9[164441]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:17:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:33 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:17:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:33.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:17:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:17:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:33.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:17:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 597 B/s wr, 2 op/s
Jan 21 11:17:33 np0005590810 python3.9[164614]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:17:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:17:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161734 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:17:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:34 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:34 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:34 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:17:34 np0005590810 podman[164751]: 2026-01-21 16:17:34.621124815 +0000 UTC m=+0.125336017 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 21 11:17:34 np0005590810 python3.9[164799]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 11:17:34 np0005590810 systemd[1]: Reloading.
Jan 21 11:17:34 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:17:34 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:17:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:35 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:17:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:35.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:17:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:17:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:35.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:17:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 21 11:17:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:17:35] "GET /metrics HTTP/1.1" 200 48203 "" "Prometheus/2.51.0"
Jan 21 11:17:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:17:35] "GET /metrics HTTP/1.1" 200 48203 "" "Prometheus/2.51.0"
Jan 21 11:17:36 np0005590810 python3.9[164992]: ansible-ansible.builtin.service_facts Invoked
Jan 21 11:17:36 np0005590810 network[165009]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 11:17:36 np0005590810 network[165010]: 'network-scripts' will be removed from distribution in near future.
Jan 21 11:17:36 np0005590810 network[165011]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 11:17:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:36 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:36 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:17:37.017Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:17:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:17:37.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:17:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:37 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:37.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:37.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 767 B/s wr, 3 op/s
Jan 21 11:17:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:37 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:17:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:37 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:17:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:38 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:38 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:39 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:39.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:17:39
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['.nfs', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'volumes', '.mgr', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.log']
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:17:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:17:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:17:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:17:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:39.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 767 B/s wr, 3 op/s
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:17:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:17:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161739 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:17:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:40 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:40 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:40 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:17:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:41 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:17:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:17:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:17:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:41.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:17:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 21 11:17:41 np0005590810 python3.9[165304]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:17:42 np0005590810 python3.9[165457]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:17:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:42 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:42 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a00026e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:42 np0005590810 python3.9[165610]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:17:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:43 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:43.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161743 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:17:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:17:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:43.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:17:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Jan 21 11:17:43 np0005590810 python3.9[165765]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:17:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:17:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:44 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:44 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:44 np0005590810 python3.9[165918]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:17:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:45 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:45.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:45.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 21 11:17:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:17:45] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Jan 21 11:17:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:17:45] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Jan 21 11:17:46 np0005590810 python3.9[166073]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:17:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:46 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:46 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:46 np0005590810 python3.9[166226]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:17:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:17:47.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:17:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:17:47.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:17:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:47 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:47.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:47.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Jan 21 11:17:47 np0005590810 python3.9[166381]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:48 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:48 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:48 np0005590810 python3.9[166533]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:49 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:17:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:49.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:17:49 np0005590810 python3.9[166686]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:17:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:49.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Jan 21 11:17:49 np0005590810 python3.9[166839]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:50 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:50 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:50 np0005590810 podman[166963]: 2026-01-21 16:17:50.319181489 +0000 UTC m=+0.062530844 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 21 11:17:50 np0005590810 python3.9[167009]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:51 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:51.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:51 np0005590810 python3.9[167163]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:17:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:51.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:17:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Jan 21 11:17:51 np0005590810 python3.9[167316]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:52 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:52 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:52 np0005590810 python3.9[167468]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:53 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:53 np0005590810 python3.9[167621]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:17:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:53.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:17:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:53.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 21 11:17:53 np0005590810 python3.9[167774]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:17:54 np0005590810 python3.9[167926]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:54 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:17:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:17:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:54 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:54 np0005590810 python3.9[168078]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:55 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:55.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:55.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:55 np0005590810 python3.9[168232]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Jan 21 11:17:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:17:55] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 21 11:17:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:17:55] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 21 11:17:55 np0005590810 python3.9[168384]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:17:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:56 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:56 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:17:57.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:17:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:17:57.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:17:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:57 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:17:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:57.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:17:57 np0005590810 python3.9[168537]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:17:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:57.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:17:57 np0005590810 python3.9[168715]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 11:17:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:58 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:58 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:58 np0005590810 python3.9[168867]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 11:17:58 np0005590810 systemd[1]: Reloading.
Jan 21 11:17:58 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:17:58 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:17:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:17:59 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:17:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:17:59.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:17:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:17:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:17:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:17:59.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:17:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:17:59 np0005590810 python3.9[169055]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:18:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:00 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:00 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:00 np0005590810 python3.9[169208]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:18:00 np0005590810 python3.9[169362]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:18:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:01 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:01.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:01.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:18:01 np0005590810 python3.9[169516]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:18:02 np0005590810 python3.9[169669]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:18:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:02 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:02 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:02 np0005590810 python3.9[169822]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:18:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:03 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:03.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:03 np0005590810 python3.9[169976]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:18:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:03.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:18:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:18:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:04 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:04 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:04 np0005590810 podman[170105]: 2026-01-21 16:18:04.960516238 +0000 UTC m=+0.097732405 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 11:18:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:05.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:05 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:05 np0005590810 python3.9[170151]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 21 11:18:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:05.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:18:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:18:05] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 21 11:18:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:18:05] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 21 11:18:05 np0005590810 python3.9[170313]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 11:18:05 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:18:05 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:18:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:06 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:06 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:07 np0005590810 python3.9[170473]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 11:18:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:18:07.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:18:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:18:07.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:18:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:18:07.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:18:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:07.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:07 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:18:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:07.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:18:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:18:08 np0005590810 python3.9[170634]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 11:18:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:08 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:08 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:09 np0005590810 python3.9[170719]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:18:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:09.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:09 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:18:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:18:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:18:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:18:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:18:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:18:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:09.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:18:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:18:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:18:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:18:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:18:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:18:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:10 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:10 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:11.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:11 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:11.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:18:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:12 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:12 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:13.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:13 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161813 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:18:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:13.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:18:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:18:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:14 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:14 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:15.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:15 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:18:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:15.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:18:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:18:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:18:15] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 21 11:18:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:18:15] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 21 11:18:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:16 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:16 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:18:17.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:18:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:17.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:17 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:17.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:18:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:18 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:18 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:19.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:19 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:18:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:19.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:18:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:20 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:20 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:20 np0005590810 podman[170767]: 2026-01-21 16:18:20.692356243 +0000 UTC m=+0.062323528 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 11:18:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:21.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:21 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:21.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:18:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:21 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:18:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:18:22.005 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:18:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:18:22.006 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:18:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:18:22.006 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:18:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:22 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:22 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:18:22 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:18:23 np0005590810 podman[170960]: 2026-01-21 16:18:23.033725462 +0000 UTC m=+0.035297096 container create c1cd16f09accd9180e2e1a330a809e742efe5cde52c0fcf20fa88b2cc80f8af6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_joliot, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:18:23 np0005590810 systemd[1]: Started libpod-conmon-c1cd16f09accd9180e2e1a330a809e742efe5cde52c0fcf20fa88b2cc80f8af6.scope.
Jan 21 11:18:23 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:18:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:23.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:23 np0005590810 podman[170960]: 2026-01-21 16:18:23.017783846 +0000 UTC m=+0.019355500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:18:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:23 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:23 np0005590810 podman[170960]: 2026-01-21 16:18:23.125106482 +0000 UTC m=+0.126678136 container init c1cd16f09accd9180e2e1a330a809e742efe5cde52c0fcf20fa88b2cc80f8af6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_joliot, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 21 11:18:23 np0005590810 podman[170960]: 2026-01-21 16:18:23.133608381 +0000 UTC m=+0.135180015 container start c1cd16f09accd9180e2e1a330a809e742efe5cde52c0fcf20fa88b2cc80f8af6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_joliot, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:18:23 np0005590810 podman[170960]: 2026-01-21 16:18:23.13717506 +0000 UTC m=+0.138746694 container attach c1cd16f09accd9180e2e1a330a809e742efe5cde52c0fcf20fa88b2cc80f8af6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 11:18:23 np0005590810 jovial_joliot[170976]: 167 167
Jan 21 11:18:23 np0005590810 systemd[1]: libpod-c1cd16f09accd9180e2e1a330a809e742efe5cde52c0fcf20fa88b2cc80f8af6.scope: Deactivated successfully.
Jan 21 11:18:23 np0005590810 podman[170960]: 2026-01-21 16:18:23.140885942 +0000 UTC m=+0.142457566 container died c1cd16f09accd9180e2e1a330a809e742efe5cde52c0fcf20fa88b2cc80f8af6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:18:23 np0005590810 systemd[1]: var-lib-containers-storage-overlay-7123e46a8f602b2f4cbb92d44c6ed549cde8b279edaccb720949fef9f4a14bfe-merged.mount: Deactivated successfully.
Jan 21 11:18:23 np0005590810 podman[170960]: 2026-01-21 16:18:23.179280661 +0000 UTC m=+0.180852285 container remove c1cd16f09accd9180e2e1a330a809e742efe5cde52c0fcf20fa88b2cc80f8af6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 11:18:23 np0005590810 systemd[1]: libpod-conmon-c1cd16f09accd9180e2e1a330a809e742efe5cde52c0fcf20fa88b2cc80f8af6.scope: Deactivated successfully.
Jan 21 11:18:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:18:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:23.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:18:23 np0005590810 podman[171002]: 2026-01-21 16:18:23.377748852 +0000 UTC m=+0.044870437 container create c8f31fdc2b115213de53eb3692ff75914d8fbb410c9290bca67f166350f4aad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 21 11:18:23 np0005590810 systemd[1]: Started libpod-conmon-c8f31fdc2b115213de53eb3692ff75914d8fbb410c9290bca67f166350f4aad1.scope.
Jan 21 11:18:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:18:23 np0005590810 podman[171002]: 2026-01-21 16:18:23.359116574 +0000 UTC m=+0.026238159 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:18:23 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:18:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9449e92f13420c991323fee7b74b97eae8649c9d978ae27a99e17ae60c47ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9449e92f13420c991323fee7b74b97eae8649c9d978ae27a99e17ae60c47ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9449e92f13420c991323fee7b74b97eae8649c9d978ae27a99e17ae60c47ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9449e92f13420c991323fee7b74b97eae8649c9d978ae27a99e17ae60c47ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9449e92f13420c991323fee7b74b97eae8649c9d978ae27a99e17ae60c47ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:23 np0005590810 podman[171002]: 2026-01-21 16:18:23.483344845 +0000 UTC m=+0.150466450 container init c8f31fdc2b115213de53eb3692ff75914d8fbb410c9290bca67f166350f4aad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaplygin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:18:23 np0005590810 podman[171002]: 2026-01-21 16:18:23.496040262 +0000 UTC m=+0.163161837 container start c8f31fdc2b115213de53eb3692ff75914d8fbb410c9290bca67f166350f4aad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaplygin, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:18:23 np0005590810 podman[171002]: 2026-01-21 16:18:23.499476256 +0000 UTC m=+0.166597831 container attach c8f31fdc2b115213de53eb3692ff75914d8fbb410c9290bca67f166350f4aad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaplygin, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:18:23 np0005590810 kind_chaplygin[171019]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:18:23 np0005590810 kind_chaplygin[171019]: --> All data devices are unavailable
Jan 21 11:18:23 np0005590810 systemd[1]: libpod-c8f31fdc2b115213de53eb3692ff75914d8fbb410c9290bca67f166350f4aad1.scope: Deactivated successfully.
Jan 21 11:18:23 np0005590810 podman[171002]: 2026-01-21 16:18:23.881920916 +0000 UTC m=+0.549042511 container died c8f31fdc2b115213de53eb3692ff75914d8fbb410c9290bca67f166350f4aad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaplygin, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:18:23 np0005590810 systemd[1]: var-lib-containers-storage-overlay-5b9449e92f13420c991323fee7b74b97eae8649c9d978ae27a99e17ae60c47ca-merged.mount: Deactivated successfully.
Jan 21 11:18:23 np0005590810 podman[171002]: 2026-01-21 16:18:23.93102639 +0000 UTC m=+0.598147955 container remove c8f31fdc2b115213de53eb3692ff75914d8fbb410c9290bca67f166350f4aad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 21 11:18:23 np0005590810 systemd[1]: libpod-conmon-c8f31fdc2b115213de53eb3692ff75914d8fbb410c9290bca67f166350f4aad1.scope: Deactivated successfully.
Jan 21 11:18:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:18:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:18:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:18:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:24 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:24 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:24 np0005590810 podman[171136]: 2026-01-21 16:18:24.510675801 +0000 UTC m=+0.045181686 container create acc7a4cda5a6414f3484de677c353098eb8415bee49df70638a1e8725348bd0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 11:18:24 np0005590810 systemd[1]: Started libpod-conmon-acc7a4cda5a6414f3484de677c353098eb8415bee49df70638a1e8725348bd0d.scope.
Jan 21 11:18:24 np0005590810 podman[171136]: 2026-01-21 16:18:24.489697843 +0000 UTC m=+0.024203778 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:18:24 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:18:24 np0005590810 podman[171136]: 2026-01-21 16:18:24.602815105 +0000 UTC m=+0.137321050 container init acc7a4cda5a6414f3484de677c353098eb8415bee49df70638a1e8725348bd0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:18:24 np0005590810 podman[171136]: 2026-01-21 16:18:24.611391926 +0000 UTC m=+0.145897821 container start acc7a4cda5a6414f3484de677c353098eb8415bee49df70638a1e8725348bd0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:18:24 np0005590810 podman[171136]: 2026-01-21 16:18:24.616012886 +0000 UTC m=+0.150518821 container attach acc7a4cda5a6414f3484de677c353098eb8415bee49df70638a1e8725348bd0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 21 11:18:24 np0005590810 modest_ritchie[171152]: 167 167
Jan 21 11:18:24 np0005590810 systemd[1]: libpod-acc7a4cda5a6414f3484de677c353098eb8415bee49df70638a1e8725348bd0d.scope: Deactivated successfully.
Jan 21 11:18:24 np0005590810 podman[171136]: 2026-01-21 16:18:24.620586066 +0000 UTC m=+0.155091951 container died acc7a4cda5a6414f3484de677c353098eb8415bee49df70638a1e8725348bd0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:18:24 np0005590810 systemd[1]: var-lib-containers-storage-overlay-dcd2f943f53f1481504be76501770a4f4f158709c27c0849b22bb21a77cd81b6-merged.mount: Deactivated successfully.
Jan 21 11:18:24 np0005590810 podman[171136]: 2026-01-21 16:18:24.659322245 +0000 UTC m=+0.193828140 container remove acc7a4cda5a6414f3484de677c353098eb8415bee49df70638a1e8725348bd0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 21 11:18:24 np0005590810 systemd[1]: libpod-conmon-acc7a4cda5a6414f3484de677c353098eb8415bee49df70638a1e8725348bd0d.scope: Deactivated successfully.
Jan 21 11:18:24 np0005590810 podman[171176]: 2026-01-21 16:18:24.818771417 +0000 UTC m=+0.039844074 container create c68a14fd5d4c8171a57f8f152c54419be08425e7355b9efb7e48ceeebad286fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mendeleev, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 21 11:18:24 np0005590810 systemd[1]: Started libpod-conmon-c68a14fd5d4c8171a57f8f152c54419be08425e7355b9efb7e48ceeebad286fb.scope.
Jan 21 11:18:24 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:18:24 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1fccb82a9b42394a426c537062a2a8bcbb4e0d25c10626cfe7ea64e52f41fce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:24 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1fccb82a9b42394a426c537062a2a8bcbb4e0d25c10626cfe7ea64e52f41fce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:24 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1fccb82a9b42394a426c537062a2a8bcbb4e0d25c10626cfe7ea64e52f41fce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:24 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1fccb82a9b42394a426c537062a2a8bcbb4e0d25c10626cfe7ea64e52f41fce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:24 np0005590810 podman[171176]: 2026-01-21 16:18:24.802450601 +0000 UTC m=+0.023523278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:18:24 np0005590810 podman[171176]: 2026-01-21 16:18:24.90725797 +0000 UTC m=+0.128330627 container init c68a14fd5d4c8171a57f8f152c54419be08425e7355b9efb7e48ceeebad286fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 11:18:24 np0005590810 podman[171176]: 2026-01-21 16:18:24.913554712 +0000 UTC m=+0.134627379 container start c68a14fd5d4c8171a57f8f152c54419be08425e7355b9efb7e48ceeebad286fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:18:24 np0005590810 podman[171176]: 2026-01-21 16:18:24.916477101 +0000 UTC m=+0.137549758 container attach c68a14fd5d4c8171a57f8f152c54419be08425e7355b9efb7e48ceeebad286fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Jan 21 11:18:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:24 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:18:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:24 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:18:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:25.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:25 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]: {
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:    "0": [
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:        {
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            "devices": [
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "/dev/loop3"
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            ],
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            "lv_name": "ceph_lv0",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            "lv_size": "21470642176",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            "name": "ceph_lv0",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            "tags": {
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.cluster_name": "ceph",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.crush_device_class": "",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.encrypted": "0",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.osd_id": "0",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.type": "block",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.vdo": "0",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:                "ceph.with_tpm": "0"
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            },
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            "type": "block",
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:            "vg_name": "ceph_vg0"
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:        }
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]:    ]
Jan 21 11:18:25 np0005590810 pensive_mendeleev[171192]: }
Jan 21 11:18:25 np0005590810 systemd[1]: libpod-c68a14fd5d4c8171a57f8f152c54419be08425e7355b9efb7e48ceeebad286fb.scope: Deactivated successfully.
Jan 21 11:18:25 np0005590810 podman[171176]: 2026-01-21 16:18:25.198456443 +0000 UTC m=+0.419529100 container died c68a14fd5d4c8171a57f8f152c54419be08425e7355b9efb7e48ceeebad286fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 21 11:18:25 np0005590810 systemd[1]: var-lib-containers-storage-overlay-f1fccb82a9b42394a426c537062a2a8bcbb4e0d25c10626cfe7ea64e52f41fce-merged.mount: Deactivated successfully.
Jan 21 11:18:25 np0005590810 podman[171176]: 2026-01-21 16:18:25.235568703 +0000 UTC m=+0.456641360 container remove c68a14fd5d4c8171a57f8f152c54419be08425e7355b9efb7e48ceeebad286fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_mendeleev, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 11:18:25 np0005590810 systemd[1]: libpod-conmon-c68a14fd5d4c8171a57f8f152c54419be08425e7355b9efb7e48ceeebad286fb.scope: Deactivated successfully.
Jan 21 11:18:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:25.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:18:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:18:25] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Jan 21 11:18:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:18:25] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Jan 21 11:18:25 np0005590810 podman[171305]: 2026-01-21 16:18:25.778563878 +0000 UTC m=+0.044088352 container create ff5b8f771e4c051946e298ff73c9b2f69417dd60b55a8c586be53442b341d87c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:18:25 np0005590810 systemd[1]: Started libpod-conmon-ff5b8f771e4c051946e298ff73c9b2f69417dd60b55a8c586be53442b341d87c.scope.
Jan 21 11:18:25 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:18:25 np0005590810 podman[171305]: 2026-01-21 16:18:25.836708187 +0000 UTC m=+0.102232681 container init ff5b8f771e4c051946e298ff73c9b2f69417dd60b55a8c586be53442b341d87c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 21 11:18:25 np0005590810 podman[171305]: 2026-01-21 16:18:25.843313779 +0000 UTC m=+0.108838263 container start ff5b8f771e4c051946e298ff73c9b2f69417dd60b55a8c586be53442b341d87c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:18:25 np0005590810 podman[171305]: 2026-01-21 16:18:25.846406163 +0000 UTC m=+0.111930697 container attach ff5b8f771e4c051946e298ff73c9b2f69417dd60b55a8c586be53442b341d87c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 21 11:18:25 np0005590810 inspiring_jackson[171322]: 167 167
Jan 21 11:18:25 np0005590810 systemd[1]: libpod-ff5b8f771e4c051946e298ff73c9b2f69417dd60b55a8c586be53442b341d87c.scope: Deactivated successfully.
Jan 21 11:18:25 np0005590810 podman[171305]: 2026-01-21 16:18:25.848797596 +0000 UTC m=+0.114322070 container died ff5b8f771e4c051946e298ff73c9b2f69417dd60b55a8c586be53442b341d87c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 21 11:18:25 np0005590810 podman[171305]: 2026-01-21 16:18:25.76154641 +0000 UTC m=+0.027070904 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:18:25 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8732f535845680af512574c51071a39029559fc4050d8ec3e6493c1fd3caa569-merged.mount: Deactivated successfully.
Jan 21 11:18:25 np0005590810 podman[171305]: 2026-01-21 16:18:25.887298337 +0000 UTC m=+0.152822801 container remove ff5b8f771e4c051946e298ff73c9b2f69417dd60b55a8c586be53442b341d87c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_jackson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:18:25 np0005590810 systemd[1]: libpod-conmon-ff5b8f771e4c051946e298ff73c9b2f69417dd60b55a8c586be53442b341d87c.scope: Deactivated successfully.
Jan 21 11:18:26 np0005590810 podman[171347]: 2026-01-21 16:18:26.045784351 +0000 UTC m=+0.048013602 container create 705883054a71b582a10f7b75b425619d243a7893c8b4d16b87446f3d6c9a43f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_tharp, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:18:26 np0005590810 systemd[1]: Started libpod-conmon-705883054a71b582a10f7b75b425619d243a7893c8b4d16b87446f3d6c9a43f3.scope.
Jan 21 11:18:26 np0005590810 podman[171347]: 2026-01-21 16:18:26.020096429 +0000 UTC m=+0.022325680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:18:26 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:18:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8846107a198b90a15f3ee4dfb1c865507eb57f36c68efd6ebc9dc09e7d22238/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8846107a198b90a15f3ee4dfb1c865507eb57f36c68efd6ebc9dc09e7d22238/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8846107a198b90a15f3ee4dfb1c865507eb57f36c68efd6ebc9dc09e7d22238/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8846107a198b90a15f3ee4dfb1c865507eb57f36c68efd6ebc9dc09e7d22238/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:18:26 np0005590810 podman[171347]: 2026-01-21 16:18:26.139783592 +0000 UTC m=+0.142012813 container init 705883054a71b582a10f7b75b425619d243a7893c8b4d16b87446f3d6c9a43f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_tharp, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 21 11:18:26 np0005590810 podman[171347]: 2026-01-21 16:18:26.147882798 +0000 UTC m=+0.150112019 container start 705883054a71b582a10f7b75b425619d243a7893c8b4d16b87446f3d6c9a43f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_tharp, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 21 11:18:26 np0005590810 podman[171347]: 2026-01-21 16:18:26.151430506 +0000 UTC m=+0.153659907 container attach 705883054a71b582a10f7b75b425619d243a7893c8b4d16b87446f3d6c9a43f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:18:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:26 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:26 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:26 np0005590810 lvm[171438]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:18:26 np0005590810 lvm[171438]: VG ceph_vg0 finished
Jan 21 11:18:26 np0005590810 silly_tharp[171363]: {}
Jan 21 11:18:26 np0005590810 systemd[1]: libpod-705883054a71b582a10f7b75b425619d243a7893c8b4d16b87446f3d6c9a43f3.scope: Deactivated successfully.
Jan 21 11:18:26 np0005590810 systemd[1]: libpod-705883054a71b582a10f7b75b425619d243a7893c8b4d16b87446f3d6c9a43f3.scope: Consumed 1.014s CPU time.
Jan 21 11:18:26 np0005590810 podman[171347]: 2026-01-21 16:18:26.824170231 +0000 UTC m=+0.826399442 container died 705883054a71b582a10f7b75b425619d243a7893c8b4d16b87446f3d6c9a43f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_tharp, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:18:26 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d8846107a198b90a15f3ee4dfb1c865507eb57f36c68efd6ebc9dc09e7d22238-merged.mount: Deactivated successfully.
Jan 21 11:18:26 np0005590810 podman[171347]: 2026-01-21 16:18:26.865733906 +0000 UTC m=+0.867963127 container remove 705883054a71b582a10f7b75b425619d243a7893c8b4d16b87446f3d6c9a43f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_tharp, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:18:26 np0005590810 systemd[1]: libpod-conmon-705883054a71b582a10f7b75b425619d243a7893c8b4d16b87446f3d6c9a43f3.scope: Deactivated successfully.
Jan 21 11:18:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:18:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:18:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:18:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:18:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:18:27.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:18:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:18:27.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:18:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:27.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:27 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:27.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:18:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:28 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:28 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:18:28 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:18:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:28 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:28 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000015:nfs.cephfs.2: -2
Jan 21 11:18:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:28 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:18:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:18:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:29.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:18:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:29 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe198001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:18:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:29.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:18:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:30 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:30 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:31.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:31 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:31.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:18:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:33.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:33 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161833 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:18:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:33.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 21 11:18:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:18:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:34 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:34 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:18:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:35.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:18:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:35 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:18:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:35.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:18:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:18:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:18:35] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Jan 21 11:18:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:18:35] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Jan 21 11:18:35 np0005590810 podman[171661]: 2026-01-21 16:18:35.735612765 +0000 UTC m=+0.101513041 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 11:18:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:36 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:36 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:18:37.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:18:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:37.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:37 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:18:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:37.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:18:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:18:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:38 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:38 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:39.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:39 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:18:39
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['vms', '.mgr', 'backups', 'images', 'default.rgw.meta', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.nfs']
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:18:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:18:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:18:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:18:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:39.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:18:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:18:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:40 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe198001eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:40 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:41.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:41 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:18:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:41.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:18:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:18:41 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] Check health
Jan 21 11:18:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161841 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:18:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:42 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:42 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:43.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:43 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:43.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:18:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:18:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:44 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:44 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0004e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:18:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:45.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:18:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:45 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:45.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:18:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:18:45] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Jan 21 11:18:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:18:45] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Jan 21 11:18:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:46 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:46 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:18:47.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:18:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:18:47.030Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:18:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:47.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:47 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0004e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:47.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:18:48 np0005590810 kernel: SELinux:  Converting 2781 SID table entries...
Jan 21 11:18:48 np0005590810 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 11:18:48 np0005590810 kernel: SELinux:  policy capability open_perms=1
Jan 21 11:18:48 np0005590810 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 11:18:48 np0005590810 kernel: SELinux:  policy capability always_check_network=0
Jan 21 11:18:48 np0005590810 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 11:18:48 np0005590810 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 11:18:48 np0005590810 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 11:18:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:48 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:48 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:49.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:49 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:18:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:18:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:49.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:18:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:18:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:49 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:18:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:50 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:50 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:51.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:51 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:51.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:18:51 np0005590810 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 21 11:18:51 np0005590810 podman[171745]: 2026-01-21 16:18:51.689980955 +0000 UTC m=+0.059632766 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 21 11:18:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:52 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:52 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:52 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:18:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:52 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:18:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:53.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:53 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:18:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:53.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:18:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:18:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:18:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:18:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:18:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:54 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:54 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:55.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:55 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:55.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:18:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:55 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:18:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:18:55] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Jan 21 11:18:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:18:55] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Jan 21 11:18:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:56 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:56 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:18:57.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:18:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:18:57.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:18:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:57.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:57 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:18:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:57.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:18:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 21 11:18:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:58 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:58 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:18:59.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:18:59 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:18:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:18:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:18:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:18:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:18:59.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:18:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 21 11:19:00 np0005590810 kernel: SELinux:  Converting 2781 SID table entries...
Jan 21 11:19:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:00 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:00 np0005590810 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 11:19:00 np0005590810 kernel: SELinux:  policy capability open_perms=1
Jan 21 11:19:00 np0005590810 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 11:19:00 np0005590810 kernel: SELinux:  policy capability always_check_network=0
Jan 21 11:19:00 np0005590810 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 11:19:00 np0005590810 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 11:19:00 np0005590810 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 11:19:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:00 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:01.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:01 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:01.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:19:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=cleanup t=2026-01-21T16:19:01.646869686Z level=info msg="Completed cleanup jobs" duration=38.502742ms
Jan 21 11:19:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=plugins.update.checker t=2026-01-21T16:19:01.730607864Z level=info msg="Update check succeeded" duration=52.23551ms
Jan 21 11:19:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=grafana.update.checker t=2026-01-21T16:19:01.732863172Z level=info msg="Update check succeeded" duration=53.382304ms
Jan 21 11:19:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161901 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:19:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:02 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:02 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:03.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:03 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:03.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:19:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:19:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:04 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:04 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:05.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:05 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:19:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:05.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:19:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:19:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:19:05] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Jan 21 11:19:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:19:05] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Jan 21 11:19:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:06 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003b00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:06 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:06 np0005590810 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 21 11:19:06 np0005590810 podman[171810]: 2026-01-21 16:19:06.728478851 +0000 UTC m=+0.097685244 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 11:19:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:19:07.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:19:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:07.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:07 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:07.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:19:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:08 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:08 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003b20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:09.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:09 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:19:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:19:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:19:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:19:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:19:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:09.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:19:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:19:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:19:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:19:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:19:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:10 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:10 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:11.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:11 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.002000062s ======
Jan 21 11:19:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:11.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Jan 21 11:19:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:19:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:12 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:12 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:13.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:13 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:13.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:19:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:19:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:14 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:14 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:15.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:15 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:15.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:19:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:19:15] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 21 11:19:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:19:15] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 21 11:19:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:16 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:16 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:19:17.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:19:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:19:17.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:19:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:17.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:17 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161917 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 3ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:19:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:17.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:19:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:18 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:18 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:19.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:19 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180003b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:19:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:19.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:19:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:20 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:20 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:21.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:21 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:21.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 21 11:19:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:19:22.007 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:19:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:19:22.007 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:19:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:19:22.007 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:19:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:22 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:22 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:22 np0005590810 podman[177658]: 2026-01-21 16:19:22.702458813 +0000 UTC m=+0.077080303 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 11:19:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:23.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:23 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:23.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 21 11:19:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161923 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:19:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:19:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:19:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:19:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:24 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:24 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:25.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:25 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:25.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:19:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:19:25] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 21 11:19:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:19:25] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 21 11:19:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:26 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:26 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:19:27.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:19:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:27.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:27 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:27.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:19:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:27 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:19:28 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 21 11:19:28 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:19:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:28 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:28 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:28 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:19:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:29.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:29 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:19:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:19:29 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:19:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:29.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:19:29 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 21 11:19:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 11:19:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:30 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a00033a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:30 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184001ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:30 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:19:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:30 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:19:30 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:30 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:30 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 11:19:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:31 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:19:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:31 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:19:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:31.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:31 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:31.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:19:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:19:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:19:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:19:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 470 B/s wr, 1 op/s
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:19:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a00033a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:19:33 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:33.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:19:33 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:19:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:19:33 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:19:33 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:19:33 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:19:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:33 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184001ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:33.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:33 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 21 11:19:33 np0005590810 podman[184303]: 2026-01-21 16:19:33.7569142 +0000 UTC m=+0.046259738 container create cc7e5efbf4371d91a5c89f8a53a6da32c30585e45638c921755b427abd7157d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 11:19:33 np0005590810 systemd[1]: Started libpod-conmon-cc7e5efbf4371d91a5c89f8a53a6da32c30585e45638c921755b427abd7157d6.scope.
Jan 21 11:19:33 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:19:33 np0005590810 podman[184303]: 2026-01-21 16:19:33.739068154 +0000 UTC m=+0.028413742 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:19:33 np0005590810 podman[184303]: 2026-01-21 16:19:33.846845237 +0000 UTC m=+0.136190775 container init cc7e5efbf4371d91a5c89f8a53a6da32c30585e45638c921755b427abd7157d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_aryabhata, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 11:19:33 np0005590810 podman[184303]: 2026-01-21 16:19:33.855635457 +0000 UTC m=+0.144980995 container start cc7e5efbf4371d91a5c89f8a53a6da32c30585e45638c921755b427abd7157d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 21 11:19:33 np0005590810 podman[184303]: 2026-01-21 16:19:33.859951909 +0000 UTC m=+0.149297497 container attach cc7e5efbf4371d91a5c89f8a53a6da32c30585e45638c921755b427abd7157d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_aryabhata, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 21 11:19:33 np0005590810 admiring_aryabhata[184381]: 167 167
Jan 21 11:19:33 np0005590810 systemd[1]: libpod-cc7e5efbf4371d91a5c89f8a53a6da32c30585e45638c921755b427abd7157d6.scope: Deactivated successfully.
Jan 21 11:19:33 np0005590810 podman[184303]: 2026-01-21 16:19:33.864270811 +0000 UTC m=+0.153616349 container died cc7e5efbf4371d91a5c89f8a53a6da32c30585e45638c921755b427abd7157d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_aryabhata, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:19:33 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:33 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:33 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:19:33 np0005590810 ceph-mon[74380]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 21 11:19:33 np0005590810 systemd[1]: var-lib-containers-storage-overlay-930b3f5bf7a453f956cc0a21b1fc9451632ae9affdb6eedf6fb04ac1f8906310-merged.mount: Deactivated successfully.
Jan 21 11:19:33 np0005590810 podman[184303]: 2026-01-21 16:19:33.91344907 +0000 UTC m=+0.202794608 container remove cc7e5efbf4371d91a5c89f8a53a6da32c30585e45638c921755b427abd7157d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:19:33 np0005590810 systemd[1]: libpod-conmon-cc7e5efbf4371d91a5c89f8a53a6da32c30585e45638c921755b427abd7157d6.scope: Deactivated successfully.
Jan 21 11:19:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:34 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:19:34 np0005590810 podman[184559]: 2026-01-21 16:19:34.09714175 +0000 UTC m=+0.048383114 container create 9ace4fc5c293579ac6d2ba6fa6fd7b65b32492df0397db5860dd9847e3ade6d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_rhodes, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:19:34 np0005590810 systemd[1]: Started libpod-conmon-9ace4fc5c293579ac6d2ba6fa6fd7b65b32492df0397db5860dd9847e3ade6d1.scope.
Jan 21 11:19:34 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:19:34 np0005590810 podman[184559]: 2026-01-21 16:19:34.077215439 +0000 UTC m=+0.028456823 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:19:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6eeb7ffb1a775605d348c2ddd2ffb27dda635f187ffbe2427a86a455150fe3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6eeb7ffb1a775605d348c2ddd2ffb27dda635f187ffbe2427a86a455150fe3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6eeb7ffb1a775605d348c2ddd2ffb27dda635f187ffbe2427a86a455150fe3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6eeb7ffb1a775605d348c2ddd2ffb27dda635f187ffbe2427a86a455150fe3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6eeb7ffb1a775605d348c2ddd2ffb27dda635f187ffbe2427a86a455150fe3c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:34 np0005590810 podman[184559]: 2026-01-21 16:19:34.191373478 +0000 UTC m=+0.142614872 container init 9ace4fc5c293579ac6d2ba6fa6fd7b65b32492df0397db5860dd9847e3ade6d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:19:34 np0005590810 podman[184559]: 2026-01-21 16:19:34.201577802 +0000 UTC m=+0.152819186 container start 9ace4fc5c293579ac6d2ba6fa6fd7b65b32492df0397db5860dd9847e3ade6d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 21 11:19:34 np0005590810 podman[184559]: 2026-01-21 16:19:34.205979876 +0000 UTC m=+0.157221250 container attach 9ace4fc5c293579ac6d2ba6fa6fd7b65b32492df0397db5860dd9847e3ade6d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:19:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:19:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:34 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 470 B/s wr, 1 op/s
Jan 21 11:19:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:34 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:34 np0005590810 frosty_rhodes[184641]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:19:34 np0005590810 frosty_rhodes[184641]: --> All data devices are unavailable
Jan 21 11:19:34 np0005590810 systemd[1]: libpod-9ace4fc5c293579ac6d2ba6fa6fd7b65b32492df0397db5860dd9847e3ade6d1.scope: Deactivated successfully.
Jan 21 11:19:34 np0005590810 conmon[184641]: conmon 9ace4fc5c293579ac6d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ace4fc5c293579ac6d2ba6fa6fd7b65b32492df0397db5860dd9847e3ade6d1.scope/container/memory.events
Jan 21 11:19:34 np0005590810 podman[184559]: 2026-01-21 16:19:34.557654776 +0000 UTC m=+0.508896140 container died 9ace4fc5c293579ac6d2ba6fa6fd7b65b32492df0397db5860dd9847e3ade6d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 11:19:34 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d6eeb7ffb1a775605d348c2ddd2ffb27dda635f187ffbe2427a86a455150fe3c-merged.mount: Deactivated successfully.
Jan 21 11:19:34 np0005590810 podman[184559]: 2026-01-21 16:19:34.608264668 +0000 UTC m=+0.559506042 container remove 9ace4fc5c293579ac6d2ba6fa6fd7b65b32492df0397db5860dd9847e3ade6d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:19:34 np0005590810 systemd[1]: libpod-conmon-9ace4fc5c293579ac6d2ba6fa6fd7b65b32492df0397db5860dd9847e3ade6d1.scope: Deactivated successfully.
Jan 21 11:19:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:35.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:35 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a00033a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:35 np0005590810 podman[185354]: 2026-01-21 16:19:35.25750944 +0000 UTC m=+0.048688513 container create a50afb3ffdcb29a7315469bc7735f67172b9c562758049a77f7c8423a5491c27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ritchie, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:19:35 np0005590810 systemd[1]: Started libpod-conmon-a50afb3ffdcb29a7315469bc7735f67172b9c562758049a77f7c8423a5491c27.scope.
Jan 21 11:19:35 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:19:35 np0005590810 podman[185354]: 2026-01-21 16:19:35.236857447 +0000 UTC m=+0.028036530 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:19:35 np0005590810 podman[185354]: 2026-01-21 16:19:35.344091555 +0000 UTC m=+0.135270648 container init a50afb3ffdcb29a7315469bc7735f67172b9c562758049a77f7c8423a5491c27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ritchie, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:19:35 np0005590810 podman[185354]: 2026-01-21 16:19:35.351965746 +0000 UTC m=+0.143144829 container start a50afb3ffdcb29a7315469bc7735f67172b9c562758049a77f7c8423a5491c27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:19:35 np0005590810 podman[185354]: 2026-01-21 16:19:35.355566486 +0000 UTC m=+0.146745579 container attach a50afb3ffdcb29a7315469bc7735f67172b9c562758049a77f7c8423a5491c27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:19:35 np0005590810 intelligent_ritchie[185431]: 167 167
Jan 21 11:19:35 np0005590810 systemd[1]: libpod-a50afb3ffdcb29a7315469bc7735f67172b9c562758049a77f7c8423a5491c27.scope: Deactivated successfully.
Jan 21 11:19:35 np0005590810 conmon[185431]: conmon a50afb3ffdcb29a73154 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a50afb3ffdcb29a7315469bc7735f67172b9c562758049a77f7c8423a5491c27.scope/container/memory.events
Jan 21 11:19:35 np0005590810 podman[185354]: 2026-01-21 16:19:35.360736405 +0000 UTC m=+0.151915508 container died a50afb3ffdcb29a7315469bc7735f67172b9c562758049a77f7c8423a5491c27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 11:19:35 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c117238339013b9e2a2c74891123519c1e5e7728d1f5faaed773ec97c0c07612-merged.mount: Deactivated successfully.
Jan 21 11:19:35 np0005590810 podman[185354]: 2026-01-21 16:19:35.416806134 +0000 UTC m=+0.207985227 container remove a50afb3ffdcb29a7315469bc7735f67172b9c562758049a77f7c8423a5491c27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:19:35 np0005590810 systemd[1]: libpod-conmon-a50afb3ffdcb29a7315469bc7735f67172b9c562758049a77f7c8423a5491c27.scope: Deactivated successfully.
Jan 21 11:19:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:35.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:19:35] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 21 11:19:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:19:35] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 21 11:19:35 np0005590810 podman[185602]: 2026-01-21 16:19:35.617200217 +0000 UTC m=+0.052127499 container create 4d7aa7da36a91a1c5bb4637bb782b7b4b38aa69448d85fe51ad757367fdd9677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_black, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:19:35 np0005590810 systemd[1]: Started libpod-conmon-4d7aa7da36a91a1c5bb4637bb782b7b4b38aa69448d85fe51ad757367fdd9677.scope.
Jan 21 11:19:35 np0005590810 podman[185602]: 2026-01-21 16:19:35.594978185 +0000 UTC m=+0.029905497 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:19:35 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:19:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2ef5a89ec597f99c285804fa3602c9cf3e40f2e5cbeafa835157517c78c8c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2ef5a89ec597f99c285804fa3602c9cf3e40f2e5cbeafa835157517c78c8c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2ef5a89ec597f99c285804fa3602c9cf3e40f2e5cbeafa835157517c78c8c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2ef5a89ec597f99c285804fa3602c9cf3e40f2e5cbeafa835157517c78c8c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:35 np0005590810 podman[185602]: 2026-01-21 16:19:35.716973755 +0000 UTC m=+0.151901047 container init 4d7aa7da36a91a1c5bb4637bb782b7b4b38aa69448d85fe51ad757367fdd9677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:19:35 np0005590810 podman[185602]: 2026-01-21 16:19:35.724208597 +0000 UTC m=+0.159135899 container start 4d7aa7da36a91a1c5bb4637bb782b7b4b38aa69448d85fe51ad757367fdd9677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_black, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:19:35 np0005590810 podman[185602]: 2026-01-21 16:19:35.729443977 +0000 UTC m=+0.164371269 container attach 4d7aa7da36a91a1c5bb4637bb782b7b4b38aa69448d85fe51ad757367fdd9677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_black, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 21 11:19:36 np0005590810 youthful_black[185687]: {
Jan 21 11:19:36 np0005590810 youthful_black[185687]:    "0": [
Jan 21 11:19:36 np0005590810 youthful_black[185687]:        {
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            "devices": [
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "/dev/loop3"
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            ],
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            "lv_name": "ceph_lv0",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            "lv_size": "21470642176",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            "name": "ceph_lv0",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            "tags": {
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.cluster_name": "ceph",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.crush_device_class": "",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.encrypted": "0",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.osd_id": "0",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.type": "block",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.vdo": "0",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:                "ceph.with_tpm": "0"
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            },
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            "type": "block",
Jan 21 11:19:36 np0005590810 youthful_black[185687]:            "vg_name": "ceph_vg0"
Jan 21 11:19:36 np0005590810 youthful_black[185687]:        }
Jan 21 11:19:36 np0005590810 youthful_black[185687]:    ]
Jan 21 11:19:36 np0005590810 youthful_black[185687]: }
Jan 21 11:19:36 np0005590810 systemd[1]: libpod-4d7aa7da36a91a1c5bb4637bb782b7b4b38aa69448d85fe51ad757367fdd9677.scope: Deactivated successfully.
Jan 21 11:19:36 np0005590810 podman[185602]: 2026-01-21 16:19:36.083352776 +0000 UTC m=+0.518280058 container died 4d7aa7da36a91a1c5bb4637bb782b7b4b38aa69448d85fe51ad757367fdd9677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:19:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:36 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184001ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Jan 21 11:19:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:36 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:36 np0005590810 systemd[1]: var-lib-containers-storage-overlay-2d2ef5a89ec597f99c285804fa3602c9cf3e40f2e5cbeafa835157517c78c8c1-merged.mount: Deactivated successfully.
Jan 21 11:19:36 np0005590810 podman[185602]: 2026-01-21 16:19:36.669726391 +0000 UTC m=+1.104653663 container remove 4d7aa7da36a91a1c5bb4637bb782b7b4b38aa69448d85fe51ad757367fdd9677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_black, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:19:36 np0005590810 systemd[1]: libpod-conmon-4d7aa7da36a91a1c5bb4637bb782b7b4b38aa69448d85fe51ad757367fdd9677.scope: Deactivated successfully.
Jan 21 11:19:36 np0005590810 podman[186400]: 2026-01-21 16:19:36.98645133 +0000 UTC m=+0.133541814 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 11:19:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:19:37.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:19:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:37 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:19:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:37.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:37 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:37 np0005590810 podman[186771]: 2026-01-21 16:19:37.352859842 +0000 UTC m=+0.043972808 container create b299f7cb259658b40de308509c2ce8adcad31c398fe4a3b64b74476b2386d169 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:19:37 np0005590810 systemd[1]: Started libpod-conmon-b299f7cb259658b40de308509c2ce8adcad31c398fe4a3b64b74476b2386d169.scope.
Jan 21 11:19:37 np0005590810 podman[186771]: 2026-01-21 16:19:37.331599571 +0000 UTC m=+0.022712557 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:19:37 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:19:37 np0005590810 podman[186771]: 2026-01-21 16:19:37.454430426 +0000 UTC m=+0.145543412 container init b299f7cb259658b40de308509c2ce8adcad31c398fe4a3b64b74476b2386d169 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 21 11:19:37 np0005590810 podman[186771]: 2026-01-21 16:19:37.461873534 +0000 UTC m=+0.152986500 container start b299f7cb259658b40de308509c2ce8adcad31c398fe4a3b64b74476b2386d169 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:19:37 np0005590810 podman[186771]: 2026-01-21 16:19:37.465379522 +0000 UTC m=+0.156492488 container attach b299f7cb259658b40de308509c2ce8adcad31c398fe4a3b64b74476b2386d169 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_diffie, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:19:37 np0005590810 suspicious_diffie[186860]: 167 167
Jan 21 11:19:37 np0005590810 podman[186771]: 2026-01-21 16:19:37.468288511 +0000 UTC m=+0.159401497 container died b299f7cb259658b40de308509c2ce8adcad31c398fe4a3b64b74476b2386d169 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 11:19:37 np0005590810 systemd[1]: libpod-b299f7cb259658b40de308509c2ce8adcad31c398fe4a3b64b74476b2386d169.scope: Deactivated successfully.
Jan 21 11:19:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:37.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:37 np0005590810 systemd[1]: var-lib-containers-storage-overlay-72ec5730a69905b47b32210e2f24f5f88ea21245f5116429b6e2c1d934782b1a-merged.mount: Deactivated successfully.
Jan 21 11:19:37 np0005590810 podman[186771]: 2026-01-21 16:19:37.541531016 +0000 UTC m=+0.232643982 container remove b299f7cb259658b40de308509c2ce8adcad31c398fe4a3b64b74476b2386d169 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_diffie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 21 11:19:37 np0005590810 systemd[1]: libpod-conmon-b299f7cb259658b40de308509c2ce8adcad31c398fe4a3b64b74476b2386d169.scope: Deactivated successfully.
Jan 21 11:19:37 np0005590810 podman[187065]: 2026-01-21 16:19:37.754758082 +0000 UTC m=+0.071961497 container create fddbb64893901e7ed6dc2e9f4c110bc974b14801d378272741a65b6e0becbdde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:19:37 np0005590810 systemd[1]: Started libpod-conmon-fddbb64893901e7ed6dc2e9f4c110bc974b14801d378272741a65b6e0becbdde.scope.
Jan 21 11:19:37 np0005590810 podman[187065]: 2026-01-21 16:19:37.733541642 +0000 UTC m=+0.050745107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:19:37 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:19:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1871eafc5b36a15112f82ea7f0fd4b0f9af730e86117eacfd016553cf8dc334/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1871eafc5b36a15112f82ea7f0fd4b0f9af730e86117eacfd016553cf8dc334/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1871eafc5b36a15112f82ea7f0fd4b0f9af730e86117eacfd016553cf8dc334/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1871eafc5b36a15112f82ea7f0fd4b0f9af730e86117eacfd016553cf8dc334/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:19:37 np0005590810 podman[187065]: 2026-01-21 16:19:37.852092576 +0000 UTC m=+0.169296011 container init fddbb64893901e7ed6dc2e9f4c110bc974b14801d378272741a65b6e0becbdde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_booth, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:19:37 np0005590810 podman[187065]: 2026-01-21 16:19:37.860413901 +0000 UTC m=+0.177617326 container start fddbb64893901e7ed6dc2e9f4c110bc974b14801d378272741a65b6e0becbdde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:19:37 np0005590810 podman[187065]: 2026-01-21 16:19:37.864358832 +0000 UTC m=+0.181562237 container attach fddbb64893901e7ed6dc2e9f4c110bc974b14801d378272741a65b6e0becbdde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_booth, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Jan 21 11:19:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:38 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a00033a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Jan 21 11:19:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:38 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184001ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:38 np0005590810 lvm[187691]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:19:38 np0005590810 lvm[187691]: VG ceph_vg0 finished
Jan 21 11:19:38 np0005590810 mystifying_booth[187177]: {}
Jan 21 11:19:38 np0005590810 systemd[1]: libpod-fddbb64893901e7ed6dc2e9f4c110bc974b14801d378272741a65b6e0becbdde.scope: Deactivated successfully.
Jan 21 11:19:38 np0005590810 systemd[1]: libpod-fddbb64893901e7ed6dc2e9f4c110bc974b14801d378272741a65b6e0becbdde.scope: Consumed 1.299s CPU time.
Jan 21 11:19:38 np0005590810 podman[187065]: 2026-01-21 16:19:38.655959258 +0000 UTC m=+0.973162673 container died fddbb64893901e7ed6dc2e9f4c110bc974b14801d378272741a65b6e0becbdde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_booth, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:19:38 np0005590810 systemd[1]: var-lib-containers-storage-overlay-b1871eafc5b36a15112f82ea7f0fd4b0f9af730e86117eacfd016553cf8dc334-merged.mount: Deactivated successfully.
Jan 21 11:19:38 np0005590810 podman[187065]: 2026-01-21 16:19:38.708961133 +0000 UTC m=+1.026164548 container remove fddbb64893901e7ed6dc2e9f4c110bc974b14801d378272741a65b6e0becbdde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_booth, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 11:19:38 np0005590810 systemd[1]: libpod-conmon-fddbb64893901e7ed6dc2e9f4c110bc974b14801d378272741a65b6e0becbdde.scope: Deactivated successfully.
Jan 21 11:19:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:19:39
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['volumes', '.mgr', 'backups', 'images', '.nfs', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data']
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:19:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:39.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:39 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:19:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 21 11:19:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:19:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161939 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:19:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:39.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:19:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:19:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:19:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:40 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:19:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:40 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:19:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:40 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:40 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:40 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:19:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:40 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 21 11:19:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:40 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:41.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:41 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a00033a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:41 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:19:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:42 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 21 11:19:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:42 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:43.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:43 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:43 np0005590810 ceph-osd[82794]: bluestore.MempoolThread fragmentation_score=0.000025 took=0.000057s
Jan 21 11:19:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:43.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/161943 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:19:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:19:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:44 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a00033a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Jan 21 11:19:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:44 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:45.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:45 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:45.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:19:45] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Jan 21 11:19:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:19:45] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Jan 21 11:19:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:46 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 21 11:19:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:46 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a00033a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:19:47.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:19:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:19:47.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:19:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:19:47.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:19:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:47.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:47 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:47.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:48 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 21 11:19:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:48 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:49.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:49 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a00033a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:19:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:49.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:50 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 21 11:19:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:50 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe194003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:51.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:51 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:51.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:52 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0004e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:52 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 597 B/s wr, 2 op/s
Jan 21 11:19:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:52 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:53.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:53 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:53.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:53 np0005590810 podman[189539]: 2026-01-21 16:19:53.680125454 +0000 UTC m=+0.057986439 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 21 11:19:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:19:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:19:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:19:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:54 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:54 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:19:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:54 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1a0004e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:55.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:55 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:55.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:19:55] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:19:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:19:55] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:19:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:56 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:56 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:19:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:56 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:56 np0005590810 kernel: SELinux:  Converting 2782 SID table entries...
Jan 21 11:19:56 np0005590810 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 11:19:56 np0005590810 kernel: SELinux:  policy capability open_perms=1
Jan 21 11:19:56 np0005590810 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 11:19:56 np0005590810 kernel: SELinux:  policy capability always_check_network=0
Jan 21 11:19:56 np0005590810 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 11:19:56 np0005590810 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 11:19:56 np0005590810 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 11:19:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:19:57.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:19:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:19:57.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:19:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:57.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:57 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:57.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:19:57 np0005590810 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 21 11:19:58 np0005590810 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Jan 21 11:19:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:58 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:58 np0005590810 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Jan 21 11:19:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:19:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:58 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe178000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:19:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:19:59.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:19:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:19:59 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:19:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:19:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:19:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:19:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:19:59.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:20:00 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Jan 21 11:20:00 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Jan 21 11:20:00 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.1.0.compute-2.cbyxlf on compute-2 is in unknown state
Jan 21 11:20:00 np0005590810 ceph-mon[74380]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Jan 21 11:20:00 np0005590810 ceph-mon[74380]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Jan 21 11:20:00 np0005590810 ceph-mon[74380]:    daemon nfs.cephfs.1.0.compute-2.cbyxlf on compute-2 is in unknown state
Jan 21 11:20:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:00 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:00 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:01.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:01 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:01.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:02 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:02 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:20:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:03.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:20:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:03 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:20:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:03.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:20:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:20:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:04 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:04 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:20:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:05.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:20:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:05 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:20:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:05.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:20:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:20:05] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:20:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:20:05] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:20:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:06 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:20:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:06 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:06 np0005590810 systemd[1]: Stopping OpenSSH server daemon...
Jan 21 11:20:06 np0005590810 systemd[1]: sshd.service: Deactivated successfully.
Jan 21 11:20:06 np0005590810 systemd[1]: Stopped OpenSSH server daemon.
Jan 21 11:20:06 np0005590810 systemd[1]: sshd.service: Consumed 2.365s CPU time, read 32.0K from disk, written 0B to disk.
Jan 21 11:20:06 np0005590810 systemd[1]: Stopped target sshd-keygen.target.
Jan 21 11:20:06 np0005590810 systemd[1]: Stopping sshd-keygen.target...
Jan 21 11:20:06 np0005590810 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 11:20:06 np0005590810 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 11:20:06 np0005590810 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 11:20:06 np0005590810 systemd[1]: Reached target sshd-keygen.target.
Jan 21 11:20:06 np0005590810 systemd[1]: Starting OpenSSH server daemon...
Jan 21 11:20:06 np0005590810 systemd[1]: Started OpenSSH server daemon.
Jan 21 11:20:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:20:07.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:20:07 np0005590810 podman[190579]: 2026-01-21 16:20:07.138329649 +0000 UTC m=+0.104540766 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 11:20:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:20:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:07.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:20:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:07 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1780016c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:20:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:07.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:20:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:08 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:08 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:08 np0005590810 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 11:20:08 np0005590810 systemd[1]: Starting man-db-cache-update.service...
Jan 21 11:20:08 np0005590810 systemd[1]: Reloading.
Jan 21 11:20:08 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:20:08 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:20:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:09.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:09 np0005590810 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 11:20:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:09 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:20:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:20:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:20:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:20:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:20:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:09.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:20:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:20:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:20:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:20:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:10 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe178002070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:10 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:11.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:11 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:11.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:12 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:12 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe178002070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:13.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:13 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:13.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:13 np0005590810 python3.9[195006]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 11:20:13 np0005590810 systemd[1]: Reloading.
Jan 21 11:20:13 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:20:13 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:20:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:20:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:14 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:14 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:15 np0005590810 python3.9[196012]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 11:20:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:15.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:15 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:15 np0005590810 systemd[1]: Reloading.
Jan 21 11:20:15 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:20:15 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:20:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:15.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:20:15] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:20:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:20:15] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:20:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:16 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:20:16 np0005590810 python3.9[197181]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 11:20:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:16 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe1980045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:16 np0005590810 systemd[1]: Reloading.
Jan 21 11:20:16 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:20:16 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:20:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:20:17.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:20:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:17.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:17 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:17.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:17 np0005590810 python3.9[198391]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 11:20:17 np0005590810 systemd[1]: Reloading.
Jan 21 11:20:17 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:20:17 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:20:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:18 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:18 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:18 np0005590810 python3.9[199712]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:18 np0005590810 systemd[1]: Reloading.
Jan 21 11:20:19 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:20:19 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:20:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:19.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:19 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe178002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:20:19 np0005590810 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 11:20:19 np0005590810 systemd[1]: Finished man-db-cache-update.service.
Jan 21 11:20:19 np0005590810 systemd[1]: man-db-cache-update.service: Consumed 12.258s CPU time.
Jan 21 11:20:19 np0005590810 systemd[1]: run-r40329fb9ff844979b67267a749d7353e.service: Deactivated successfully.
Jan 21 11:20:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:19.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:20 np0005590810 python3.9[200348]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:20 np0005590810 systemd[1]: Reloading.
Jan 21 11:20:20 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:20:20 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:20:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:20 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:20 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:21 np0005590810 python3.9[200539]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:21.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:21 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:21 np0005590810 systemd[1]: Reloading.
Jan 21 11:20:21 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:20:21 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:20:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:21.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:20:22.008 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:20:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:20:22.009 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:20:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:20:22.009 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:20:22 np0005590810 python3.9[200729]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:22 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe178002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:22 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:23 np0005590810 python3.9[200885]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:23 np0005590810 systemd[1]: Reloading.
Jan 21 11:20:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:20:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:23.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:20:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:23 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:23 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:20:23 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:20:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:23.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:20:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:20:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:24 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:20:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:24 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe178002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:24 np0005590810 podman[200949]: 2026-01-21 16:20:24.710386937 +0000 UTC m=+0.073631577 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 21 11:20:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:25.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:25 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:25.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:20:25] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Jan 21 11:20:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:20:25] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Jan 21 11:20:25 np0005590810 python3.9[201097]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 11:20:25 np0005590810 systemd[1]: Reloading.
Jan 21 11:20:26 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:20:26 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:20:26 np0005590810 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 21 11:20:26 np0005590810 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 21 11:20:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:26 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:20:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:26 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188001f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:20:27.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:20:27 np0005590810 python3.9[201291]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:27.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:27 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe178002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:27.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162027 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:20:28 np0005590810 python3.9[201447]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:28 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:28 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:28 np0005590810 python3.9[201602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:28 np0005590810 ceph-mgr[74671]: [dashboard INFO request] [192.168.122.100:45572] [POST] [200] [0.002s] [4.0B] [ef7c7125-c29b-4130-814b-7c14acf6a6a4] /api/prometheus_receiver
Jan 21 11:20:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:29.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:29 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe188001f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:20:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:29.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:29 np0005590810 python3.9[201759]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:30 np0005590810 python3.9[201914]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:30 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe178004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:30 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe184003910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:31.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:31 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:31 np0005590810 python3.9[202070]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:31.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:32 np0005590810 python3.9[202226]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:32 np0005590810 kernel: ganesha.nfsd[189536]: segfault at 50 ip 00007fe22fa4432e sp 00007fe1b57f9210 error 4 in libntirpc.so.5.8[7fe22fa29000+2c000] likely on CPU 1 (core 0, socket 1)
Jan 21 11:20:32 np0005590810 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 21 11:20:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[163771]: 21/01/2026 16:20:32 : epoch 6970fc11 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe180002690 fd 48 proxy ignored for local
Jan 21 11:20:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:32 np0005590810 systemd[1]: Started Process Core Dump (PID 202277/UID 0).
Jan 21 11:20:32 np0005590810 python3.9[202383]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:33.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:33 np0005590810 systemd-coredump[202283]: Process 163776 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 62:#012#0  0x00007fe22fa4432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Jan 21 11:20:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:33.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:33 np0005590810 systemd[1]: systemd-coredump@7-202277-0.service: Deactivated successfully.
Jan 21 11:20:33 np0005590810 systemd[1]: systemd-coredump@7-202277-0.service: Consumed 1.192s CPU time.
Jan 21 11:20:33 np0005590810 podman[202545]: 2026-01-21 16:20:33.710941374 +0000 UTC m=+0.030631520 container died 70f7aa716e185736961e1bd7d3a67b35aa5899fc3b90af366d180aced2926f5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:20:33 np0005590810 systemd[1]: var-lib-containers-storage-overlay-fa7f1508d40ef1f005fc53357ac3987cf56feb2c1983fb43ede6c8a84491d44e-merged.mount: Deactivated successfully.
Jan 21 11:20:33 np0005590810 podman[202545]: 2026-01-21 16:20:33.756506108 +0000 UTC m=+0.076196184 container remove 70f7aa716e185736961e1bd7d3a67b35aa5899fc3b90af366d180aced2926f5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:20:33 np0005590810 python3.9[202540]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:33 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Main process exited, code=exited, status=139/n/a
Jan 21 11:20:33 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Failed with result 'exit-code'.
Jan 21 11:20:33 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.700s CPU time.
Jan 21 11:20:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:20:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:20:34 np0005590810 python3.9[202740]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:35.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:35 np0005590810 python3.9[202897]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:20:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:35.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:20:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:20:35] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Jan 21 11:20:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:20:35] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Jan 21 11:20:36 np0005590810 python3.9[203052]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:20:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:20:37.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:20:37 np0005590810 python3.9[203208]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:37.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:37.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:37 np0005590810 podman[203336]: 2026-01-21 16:20:37.586561218 +0000 UTC m=+0.080934959 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 21 11:20:37 np0005590810 python3.9[203383]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 11:20:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162038 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:20:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:20:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:20:38.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:20:39
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', '.nfs', 'volumes', 'default.rgw.log', 'backups', '.mgr', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms']
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:20:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:20:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:20:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:20:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:39.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:20:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:20:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:20:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:39.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:20:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:20:39 np0005590810 python3.9[203572]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:20:40 np0005590810 python3.9[203724]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:20:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Jan 21 11:20:40 np0005590810 auditd[702]: Audit daemon rotating log files
Jan 21 11:20:40 np0005590810 python3.9[203927]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:20:41 np0005590810 podman[204049]: 2026-01-21 16:20:41.234795096 +0000 UTC m=+0.067830661 container exec 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:20:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:41.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:41 np0005590810 podman[204049]: 2026-01-21 16:20:41.327663955 +0000 UTC m=+0.160699290 container exec_died 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 21 11:20:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:41.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:41 np0005590810 python3.9[204207]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:20:41 np0005590810 podman[204296]: 2026-01-21 16:20:41.846102396 +0000 UTC m=+0.072829012 container exec 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:20:41 np0005590810 podman[204296]: 2026-01-21 16:20:41.856678657 +0000 UTC m=+0.083405283 container exec_died 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:20:42 np0005590810 python3.9[204515]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:20:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 21 11:20:42 np0005590810 podman[204563]: 2026-01-21 16:20:42.447861707 +0000 UTC m=+0.087147137 container exec 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:20:42 np0005590810 podman[204563]: 2026-01-21 16:20:42.47267175 +0000 UTC m=+0.111957150 container exec_died 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:20:42 np0005590810 podman[204728]: 2026-01-21 16:20:42.730207389 +0000 UTC m=+0.071057598 container exec e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, release=1793, name=keepalived, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, version=2.2.4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, description=keepalived for Ceph)
Jan 21 11:20:42 np0005590810 podman[204728]: 2026-01-21 16:20:42.756835328 +0000 UTC m=+0.097685517 container exec_died e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, distribution-scope=public, vcs-type=git, release=1793, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, architecture=x86_64, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 21 11:20:42 np0005590810 podman[204850]: 2026-01-21 16:20:42.987890093 +0000 UTC m=+0.064931142 container exec 50c8655205428d9eb4ff0638b184dbb97bde97ceb1b8d6fa1486afcf9c09cef3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:20:42 np0005590810 python3.9[204822]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:20:43 np0005590810 podman[204850]: 2026-01-21 16:20:43.023680331 +0000 UTC m=+0.100721380 container exec_died 50c8655205428d9eb4ff0638b184dbb97bde97ceb1b8d6fa1486afcf9c09cef3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:20:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:43.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:43 np0005590810 podman[204951]: 2026-01-21 16:20:43.381638339 +0000 UTC m=+0.198512838 container exec 915b915b353636f6072df56045c72e24aa0b97f86378396f7575eacf515dce1e (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:20:43 np0005590810 podman[204951]: 2026-01-21 16:20:43.568907625 +0000 UTC m=+0.385782154 container exec_died 915b915b353636f6072df56045c72e24aa0b97f86378396f7575eacf515dce1e (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:20:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:43.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:43 np0005590810 python3.9[205106]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:20:43 np0005590810 podman[205189]: 2026-01-21 16:20:43.957101131 +0000 UTC m=+0.052443963 container exec 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:20:43 np0005590810 podman[205189]: 2026-01-21 16:20:43.9860321 +0000 UTC m=+0.081374952 container exec_died 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:20:44 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Scheduled restart job, restart counter is at 8.
Jan 21 11:20:44 np0005590810 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:20:44 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.700s CPU time.
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:20:44 np0005590810 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:20:44 np0005590810 podman[205405]: 2026-01-21 16:20:44.274188449 +0000 UTC m=+0.051079242 container create b850a4cad2271834d01e4ef2e027fc8f338a3b460a9ee8a1b9f6ef7b3038386a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:20:44 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842b9a9cd94ed67fe499e26d3db00a4fee9ab6b522f97f5582a76e6c4ba4cdf1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:44 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842b9a9cd94ed67fe499e26d3db00a4fee9ab6b522f97f5582a76e6c4ba4cdf1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:44 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842b9a9cd94ed67fe499e26d3db00a4fee9ab6b522f97f5582a76e6c4ba4cdf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:44 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842b9a9cd94ed67fe499e26d3db00a4fee9ab6b522f97f5582a76e6c4ba4cdf1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbatwb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:44 np0005590810 podman[205405]: 2026-01-21 16:20:44.247623902 +0000 UTC m=+0.024514675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:20:44 np0005590810 podman[205405]: 2026-01-21 16:20:44.354072854 +0000 UTC m=+0.130963637 container init b850a4cad2271834d01e4ef2e027fc8f338a3b460a9ee8a1b9f6ef7b3038386a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:20:44 np0005590810 podman[205405]: 2026-01-21 16:20:44.361636454 +0000 UTC m=+0.138527207 container start b850a4cad2271834d01e4ef2e027fc8f338a3b460a9ee8a1b9f6ef7b3038386a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 21 11:20:44 np0005590810 bash[205405]: b850a4cad2271834d01e4ef2e027fc8f338a3b460a9ee8a1b9f6ef7b3038386a
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:20:44 np0005590810 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:20:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 21 11:20:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:44 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 21 11:20:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:44 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 21 11:20:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:44 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 21 11:20:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:44 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 21 11:20:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:44 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 21 11:20:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:44 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 21 11:20:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:44 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 21 11:20:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:44 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:20:44 np0005590810 python3.9[205553]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:20:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 493 B/s wr, 1 op/s
Jan 21 11:20:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 611 B/s rd, 488 B/s wr, 1 op/s
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:20:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:20:45 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:20:45 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:20:45 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:20:45 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:20:45 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:20:45 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:20:45 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 21 11:20:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:45.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:45 np0005590810 podman[205785]: 2026-01-21 16:20:45.393642598 +0000 UTC m=+0.045692289 container create 5309e15c35f11666b95bd2f480ea064652736a45edf9ca1d1bf0ae48fdf2edda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_chatelet, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:20:45 np0005590810 systemd[1]: Started libpod-conmon-5309e15c35f11666b95bd2f480ea064652736a45edf9ca1d1bf0ae48fdf2edda.scope.
Jan 21 11:20:45 np0005590810 podman[205785]: 2026-01-21 16:20:45.375085665 +0000 UTC m=+0.027135386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:20:45 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:20:45 np0005590810 podman[205785]: 2026-01-21 16:20:45.500610376 +0000 UTC m=+0.152660087 container init 5309e15c35f11666b95bd2f480ea064652736a45edf9ca1d1bf0ae48fdf2edda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_chatelet, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:20:45 np0005590810 podman[205785]: 2026-01-21 16:20:45.513243519 +0000 UTC m=+0.165293210 container start 5309e15c35f11666b95bd2f480ea064652736a45edf9ca1d1bf0ae48fdf2edda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_chatelet, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 21 11:20:45 np0005590810 python3.9[205784]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769012444.1065826-1641-275376771665858/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:20:45 np0005590810 podman[205785]: 2026-01-21 16:20:45.517606302 +0000 UTC m=+0.169656013 container attach 5309e15c35f11666b95bd2f480ea064652736a45edf9ca1d1bf0ae48fdf2edda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_chatelet, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:20:45 np0005590810 reverent_chatelet[205802]: 167 167
Jan 21 11:20:45 np0005590810 systemd[1]: libpod-5309e15c35f11666b95bd2f480ea064652736a45edf9ca1d1bf0ae48fdf2edda.scope: Deactivated successfully.
Jan 21 11:20:45 np0005590810 podman[205785]: 2026-01-21 16:20:45.519939263 +0000 UTC m=+0.171988964 container died 5309e15c35f11666b95bd2f480ea064652736a45edf9ca1d1bf0ae48fdf2edda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:20:45 np0005590810 systemd[1]: var-lib-containers-storage-overlay-10e8a1179c19f24eb960eb9ef5f444a94877d1748ec7598b991506e60da6df05-merged.mount: Deactivated successfully.
Jan 21 11:20:45 np0005590810 podman[205785]: 2026-01-21 16:20:45.561661099 +0000 UTC m=+0.213710790 container remove 5309e15c35f11666b95bd2f480ea064652736a45edf9ca1d1bf0ae48fdf2edda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_chatelet, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:20:45 np0005590810 systemd[1]: libpod-conmon-5309e15c35f11666b95bd2f480ea064652736a45edf9ca1d1bf0ae48fdf2edda.scope: Deactivated successfully.
Jan 21 11:20:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:45.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:20:45] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:20:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:20:45] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:20:45 np0005590810 podman[205873]: 2026-01-21 16:20:45.74812801 +0000 UTC m=+0.046270235 container create f3c6dee3b80edf74913bf8c4db64ea7d1f7e32836d21e01b230fba075779dcab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bhaskara, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 21 11:20:45 np0005590810 systemd[1]: Started libpod-conmon-f3c6dee3b80edf74913bf8c4db64ea7d1f7e32836d21e01b230fba075779dcab.scope.
Jan 21 11:20:45 np0005590810 podman[205873]: 2026-01-21 16:20:45.728587337 +0000 UTC m=+0.026729592 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:20:45 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:20:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd614466263bd0c22da07a982c198aa14de3309c1004812ce6c76c4e798efb12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd614466263bd0c22da07a982c198aa14de3309c1004812ce6c76c4e798efb12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd614466263bd0c22da07a982c198aa14de3309c1004812ce6c76c4e798efb12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd614466263bd0c22da07a982c198aa14de3309c1004812ce6c76c4e798efb12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd614466263bd0c22da07a982c198aa14de3309c1004812ce6c76c4e798efb12/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:45 np0005590810 podman[205873]: 2026-01-21 16:20:45.856348687 +0000 UTC m=+0.154490942 container init f3c6dee3b80edf74913bf8c4db64ea7d1f7e32836d21e01b230fba075779dcab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bhaskara, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:20:45 np0005590810 podman[205873]: 2026-01-21 16:20:45.863504554 +0000 UTC m=+0.161646779 container start f3c6dee3b80edf74913bf8c4db64ea7d1f7e32836d21e01b230fba075779dcab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:20:45 np0005590810 podman[205873]: 2026-01-21 16:20:45.869986551 +0000 UTC m=+0.168128776 container attach f3c6dee3b80edf74913bf8c4db64ea7d1f7e32836d21e01b230fba075779dcab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 11:20:46 np0005590810 ceph-mon[74380]: Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 21 11:20:46 np0005590810 kind_bhaskara[205940]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:20:46 np0005590810 kind_bhaskara[205940]: --> All data devices are unavailable
Jan 21 11:20:46 np0005590810 python3.9[205999]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:20:46 np0005590810 systemd[1]: libpod-f3c6dee3b80edf74913bf8c4db64ea7d1f7e32836d21e01b230fba075779dcab.scope: Deactivated successfully.
Jan 21 11:20:46 np0005590810 podman[205873]: 2026-01-21 16:20:46.205008763 +0000 UTC m=+0.503150988 container died f3c6dee3b80edf74913bf8c4db64ea7d1f7e32836d21e01b230fba075779dcab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:20:46 np0005590810 systemd[1]: var-lib-containers-storage-overlay-dd614466263bd0c22da07a982c198aa14de3309c1004812ce6c76c4e798efb12-merged.mount: Deactivated successfully.
Jan 21 11:20:46 np0005590810 podman[205873]: 2026-01-21 16:20:46.252645779 +0000 UTC m=+0.550788004 container remove f3c6dee3b80edf74913bf8c4db64ea7d1f7e32836d21e01b230fba075779dcab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:20:46 np0005590810 systemd[1]: libpod-conmon-f3c6dee3b80edf74913bf8c4db64ea7d1f7e32836d21e01b230fba075779dcab.scope: Deactivated successfully.
Jan 21 11:20:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 977 B/s wr, 3 op/s
Jan 21 11:20:46 np0005590810 python3.9[206194]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769012445.6863155-1641-31397176705169/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:20:46 np0005590810 podman[206237]: 2026-01-21 16:20:46.834586388 +0000 UTC m=+0.047836204 container create 68631d6de95b3e262708f848420ddbab693caccad65905fbbcd5f0818fd1834d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:20:46 np0005590810 systemd[1]: Started libpod-conmon-68631d6de95b3e262708f848420ddbab693caccad65905fbbcd5f0818fd1834d.scope.
Jan 21 11:20:46 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:20:46 np0005590810 podman[206237]: 2026-01-21 16:20:46.81553442 +0000 UTC m=+0.028784256 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:20:46 np0005590810 podman[206237]: 2026-01-21 16:20:46.922937951 +0000 UTC m=+0.136187817 container init 68631d6de95b3e262708f848420ddbab693caccad65905fbbcd5f0818fd1834d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_jang, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 11:20:46 np0005590810 podman[206237]: 2026-01-21 16:20:46.933810651 +0000 UTC m=+0.147060467 container start 68631d6de95b3e262708f848420ddbab693caccad65905fbbcd5f0818fd1834d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_jang, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:20:46 np0005590810 podman[206237]: 2026-01-21 16:20:46.938163663 +0000 UTC m=+0.151413489 container attach 68631d6de95b3e262708f848420ddbab693caccad65905fbbcd5f0818fd1834d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 11:20:46 np0005590810 fervent_jang[206277]: 167 167
Jan 21 11:20:46 np0005590810 systemd[1]: libpod-68631d6de95b3e262708f848420ddbab693caccad65905fbbcd5f0818fd1834d.scope: Deactivated successfully.
Jan 21 11:20:46 np0005590810 podman[206237]: 2026-01-21 16:20:46.940672879 +0000 UTC m=+0.153922705 container died 68631d6de95b3e262708f848420ddbab693caccad65905fbbcd5f0818fd1834d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 21 11:20:46 np0005590810 systemd[1]: var-lib-containers-storage-overlay-0dc998ca697c65806639e4494014232296957f68ec223cc9b13eaf1dae8415aa-merged.mount: Deactivated successfully.
Jan 21 11:20:46 np0005590810 podman[206237]: 2026-01-21 16:20:46.978724515 +0000 UTC m=+0.191974331 container remove 68631d6de95b3e262708f848420ddbab693caccad65905fbbcd5f0818fd1834d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_jang, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Jan 21 11:20:46 np0005590810 systemd[1]: libpod-conmon-68631d6de95b3e262708f848420ddbab693caccad65905fbbcd5f0818fd1834d.scope: Deactivated successfully.
Jan 21 11:20:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:20:47.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:20:47 np0005590810 podman[206376]: 2026-01-21 16:20:47.181005507 +0000 UTC m=+0.068141490 container create d89eb06f5a43a69c5a8558964eeb5dadc6ab4726dd1542c773f755ac278b0b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:20:47 np0005590810 systemd[1]: Started libpod-conmon-d89eb06f5a43a69c5a8558964eeb5dadc6ab4726dd1542c773f755ac278b0b9a.scope.
Jan 21 11:20:47 np0005590810 podman[206376]: 2026-01-21 16:20:47.150919263 +0000 UTC m=+0.038055336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:20:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:47.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:47 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:20:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89f9dccc145f234099fe52d8ace73ad032c2007893fddd894464d363f252508e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89f9dccc145f234099fe52d8ace73ad032c2007893fddd894464d363f252508e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89f9dccc145f234099fe52d8ace73ad032c2007893fddd894464d363f252508e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89f9dccc145f234099fe52d8ace73ad032c2007893fddd894464d363f252508e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:47 np0005590810 podman[206376]: 2026-01-21 16:20:47.296770811 +0000 UTC m=+0.183906844 container init d89eb06f5a43a69c5a8558964eeb5dadc6ab4726dd1542c773f755ac278b0b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 21 11:20:47 np0005590810 podman[206376]: 2026-01-21 16:20:47.309600421 +0000 UTC m=+0.196736444 container start d89eb06f5a43a69c5a8558964eeb5dadc6ab4726dd1542c773f755ac278b0b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 21 11:20:47 np0005590810 podman[206376]: 2026-01-21 16:20:47.313737646 +0000 UTC m=+0.200873669 container attach d89eb06f5a43a69c5a8558964eeb5dadc6ab4726dd1542c773f755ac278b0b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 21 11:20:47 np0005590810 python3.9[206450]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:20:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:47.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]: {
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:    "0": [
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:        {
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            "devices": [
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "/dev/loop3"
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            ],
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            "lv_name": "ceph_lv0",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            "lv_size": "21470642176",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            "name": "ceph_lv0",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            "tags": {
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.cluster_name": "ceph",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.crush_device_class": "",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.encrypted": "0",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.osd_id": "0",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.type": "block",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.vdo": "0",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:                "ceph.with_tpm": "0"
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            },
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            "type": "block",
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:            "vg_name": "ceph_vg0"
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:        }
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]:    ]
Jan 21 11:20:47 np0005590810 crazy_engelbart[206417]: }
Jan 21 11:20:47 np0005590810 systemd[1]: libpod-d89eb06f5a43a69c5a8558964eeb5dadc6ab4726dd1542c773f755ac278b0b9a.scope: Deactivated successfully.
Jan 21 11:20:47 np0005590810 podman[206376]: 2026-01-21 16:20:47.63300967 +0000 UTC m=+0.520145663 container died d89eb06f5a43a69c5a8558964eeb5dadc6ab4726dd1542c773f755ac278b0b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 11:20:47 np0005590810 systemd[1]: var-lib-containers-storage-overlay-89f9dccc145f234099fe52d8ace73ad032c2007893fddd894464d363f252508e-merged.mount: Deactivated successfully.
Jan 21 11:20:47 np0005590810 podman[206376]: 2026-01-21 16:20:47.698675974 +0000 UTC m=+0.585811957 container remove d89eb06f5a43a69c5a8558964eeb5dadc6ab4726dd1542c773f755ac278b0b9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:20:47 np0005590810 systemd[1]: libpod-conmon-d89eb06f5a43a69c5a8558964eeb5dadc6ab4726dd1542c773f755ac278b0b9a.scope: Deactivated successfully.
Jan 21 11:20:48 np0005590810 python3.9[206639]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769012446.937971-1641-213170246202011/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:20:48 np0005590810 podman[206691]: 2026-01-21 16:20:48.272863908 +0000 UTC m=+0.044644077 container create 5b75b0b2d944a816c2c90e2e8215222717d49828701d49ca15e4c15aba009fb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_gagarin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:20:48 np0005590810 systemd[1]: Started libpod-conmon-5b75b0b2d944a816c2c90e2e8215222717d49828701d49ca15e4c15aba009fb8.scope.
Jan 21 11:20:48 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:20:48 np0005590810 podman[206691]: 2026-01-21 16:20:48.25282095 +0000 UTC m=+0.024601119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:20:48 np0005590810 podman[206691]: 2026-01-21 16:20:48.354088864 +0000 UTC m=+0.125869053 container init 5b75b0b2d944a816c2c90e2e8215222717d49828701d49ca15e4c15aba009fb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_gagarin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 21 11:20:48 np0005590810 podman[206691]: 2026-01-21 16:20:48.361827179 +0000 UTC m=+0.133607348 container start 5b75b0b2d944a816c2c90e2e8215222717d49828701d49ca15e4c15aba009fb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:20:48 np0005590810 podman[206691]: 2026-01-21 16:20:48.365138 +0000 UTC m=+0.136918189 container attach 5b75b0b2d944a816c2c90e2e8215222717d49828701d49ca15e4c15aba009fb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_gagarin, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:20:48 np0005590810 affectionate_gagarin[206742]: 167 167
Jan 21 11:20:48 np0005590810 systemd[1]: libpod-5b75b0b2d944a816c2c90e2e8215222717d49828701d49ca15e4c15aba009fb8.scope: Deactivated successfully.
Jan 21 11:20:48 np0005590810 podman[206691]: 2026-01-21 16:20:48.368586744 +0000 UTC m=+0.140366913 container died 5b75b0b2d944a816c2c90e2e8215222717d49828701d49ca15e4c15aba009fb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_gagarin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 21 11:20:48 np0005590810 systemd[1]: var-lib-containers-storage-overlay-3783b84b9ad86bba7ce0d38226e3a9a4c05844d72b89a6d133f5183d2ec96458-merged.mount: Deactivated successfully.
Jan 21 11:20:48 np0005590810 podman[206691]: 2026-01-21 16:20:48.402680289 +0000 UTC m=+0.174460458 container remove 5b75b0b2d944a816c2c90e2e8215222717d49828701d49ca15e4c15aba009fb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:20:48 np0005590810 systemd[1]: libpod-conmon-5b75b0b2d944a816c2c90e2e8215222717d49828701d49ca15e4c15aba009fb8.scope: Deactivated successfully.
Jan 21 11:20:48 np0005590810 podman[206838]: 2026-01-21 16:20:48.55550649 +0000 UTC m=+0.040361977 container create 16877b86f74d346f4a9f0fa0aca391c89f03350570fc5236c254bdee5a96346e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 21 11:20:48 np0005590810 systemd[1]: Started libpod-conmon-16877b86f74d346f4a9f0fa0aca391c89f03350570fc5236c254bdee5a96346e.scope.
Jan 21 11:20:48 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:20:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c10b8def62a726ccc39e76eb3e4ba394dd0593d679a200037de627f42aa6f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c10b8def62a726ccc39e76eb3e4ba394dd0593d679a200037de627f42aa6f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:48 np0005590810 podman[206838]: 2026-01-21 16:20:48.538888685 +0000 UTC m=+0.023744192 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:20:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c10b8def62a726ccc39e76eb3e4ba394dd0593d679a200037de627f42aa6f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c10b8def62a726ccc39e76eb3e4ba394dd0593d679a200037de627f42aa6f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:20:48 np0005590810 podman[206838]: 2026-01-21 16:20:48.64874575 +0000 UTC m=+0.133601287 container init 16877b86f74d346f4a9f0fa0aca391c89f03350570fc5236c254bdee5a96346e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 21 11:20:48 np0005590810 podman[206838]: 2026-01-21 16:20:48.655696682 +0000 UTC m=+0.140552179 container start 16877b86f74d346f4a9f0fa0aca391c89f03350570fc5236c254bdee5a96346e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:20:48 np0005590810 podman[206838]: 2026-01-21 16:20:48.658802776 +0000 UTC m=+0.143658453 container attach 16877b86f74d346f4a9f0fa0aca391c89f03350570fc5236c254bdee5a96346e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:20:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 855 B/s wr, 3 op/s
Jan 21 11:20:48 np0005590810 python3.9[206887]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:20:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:20:48.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:20:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:49.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:49 np0005590810 lvm[207087]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:20:49 np0005590810 lvm[207087]: VG ceph_vg0 finished
Jan 21 11:20:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:20:49 np0005590810 suspicious_chatterjee[206885]: {}
Jan 21 11:20:49 np0005590810 python3.9[207082]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769012448.3179736-1641-242847474698419/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:20:49 np0005590810 systemd[1]: libpod-16877b86f74d346f4a9f0fa0aca391c89f03350570fc5236c254bdee5a96346e.scope: Deactivated successfully.
Jan 21 11:20:49 np0005590810 systemd[1]: libpod-16877b86f74d346f4a9f0fa0aca391c89f03350570fc5236c254bdee5a96346e.scope: Consumed 1.250s CPU time.
Jan 21 11:20:49 np0005590810 podman[206838]: 2026-01-21 16:20:49.46191852 +0000 UTC m=+0.946774007 container died 16877b86f74d346f4a9f0fa0aca391c89f03350570fc5236c254bdee5a96346e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:20:49 np0005590810 systemd[1]: var-lib-containers-storage-overlay-52c10b8def62a726ccc39e76eb3e4ba394dd0593d679a200037de627f42aa6f9-merged.mount: Deactivated successfully.
Jan 21 11:20:49 np0005590810 podman[206838]: 2026-01-21 16:20:49.517462037 +0000 UTC m=+1.002317534 container remove 16877b86f74d346f4a9f0fa0aca391c89f03350570fc5236c254bdee5a96346e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:20:49 np0005590810 systemd[1]: libpod-conmon-16877b86f74d346f4a9f0fa0aca391c89f03350570fc5236c254bdee5a96346e.scope: Deactivated successfully.
Jan 21 11:20:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:20:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:49.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:20:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:20:50 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:20:50 np0005590810 python3.9[207254]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:20:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:20:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:20:50 np0005590810 python3.9[207404]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769012449.6325452-1641-13633012156416/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:20:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 855 B/s wr, 3 op/s
Jan 21 11:20:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:51 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 21 11:20:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:51 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 21 11:20:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:51 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:20:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:51 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:20:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:51 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 21 11:20:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:51.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:51 np0005590810 python3.9[207558]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:20:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:51.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:52 np0005590810 python3.9[207683]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769012450.9232996-1641-76807328882922/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:20:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:52 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:20:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:52 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:20:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:52 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:20:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:52 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 21 11:20:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:52 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:20:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:52 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:20:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:52 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:20:52 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 855 B/s wr, 3 op/s
Jan 21 11:20:52 np0005590810 python3.9[207835]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:20:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:20:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:53.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:20:53 np0005590810 python3.9[207960]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769012452.2301354-1641-255888658782625/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:20:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:20:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:53.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:20:54 np0005590810 python3.9[208112]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:20:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:20:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:20:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:20:54 np0005590810 python3.9[208237]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769012453.615333-1641-43426608402982/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:20:54 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 716 B/s wr, 2 op/s
Jan 21 11:20:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:20:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:55.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:20:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:20:55] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Jan 21 11:20:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:20:55] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Jan 21 11:20:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:55.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:55 np0005590810 podman[208264]: 2026-01-21 16:20:55.688081571 +0000 UTC m=+0.060429887 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 21 11:20:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162055 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:20:56 np0005590810 python3.9[208410]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 21 11:20:56 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 21 11:20:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:20:57.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:20:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:20:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:57.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:20:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:57.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:57 np0005590810 python3.9[208565]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000001a:nfs.cephfs.2: -2
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:20:58 np0005590810 python3.9[208742]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:20:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Jan 21 11:20:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:20:58.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:20:59 np0005590810 python3.9[208907]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:20:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:20:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:20:59 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a0000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:20:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:20:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:20:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:20:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:20:59.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:20:59 np0005590810 python3.9[209063]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:00 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37900016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:00 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:00 np0005590810 python3.9[209215]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:21:01 np0005590810 python3.9[209368]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:21:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:01.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:21:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:01 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:01.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:01 np0005590810 python3.9[209521]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162102 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:21:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:02 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:02 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:02 np0005590810 python3.9[209673]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 21 11:21:03 np0005590810 python3.9[209826]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:03.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:03 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.003000092s ======
Jan 21 11:21:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:03.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000092s
Jan 21 11:21:03 np0005590810 python3.9[209979]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:04 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:21:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:04 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:04 np0005590810 python3.9[210131]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 21 11:21:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:05.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:05 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:05 np0005590810 python3.9[210284]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:21:05] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Jan 21 11:21:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:21:05] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Jan 21 11:21:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:05.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:05 np0005590810 python3.9[210437]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:06 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:06 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:06 np0005590810 python3.9[210589]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 21 11:21:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:07.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:21:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:07.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:07 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:07.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:08 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:08 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:08 np0005590810 podman[210715]: 2026-01-21 16:21:08.519377559 +0000 UTC m=+0.114599091 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 21 11:21:08 np0005590810 python3.9[210763]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Jan 21 11:21:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:08.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:21:09 np0005590810 python3.9[210893]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012467.9782443-2304-131228296737683/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:21:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:09.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:09 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:21:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.402395) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012469402434, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4012, "num_deletes": 501, "total_data_size": 8116341, "memory_usage": 8245192, "flush_reason": "Manual Compaction"}
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012469436331, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 4568121, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13142, "largest_seqno": 17153, "table_properties": {"data_size": 4556322, "index_size": 6693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4037, "raw_key_size": 31766, "raw_average_key_size": 19, "raw_value_size": 4528555, "raw_average_value_size": 2848, "num_data_blocks": 293, "num_entries": 1590, "num_filter_entries": 1590, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769012039, "oldest_key_time": 1769012039, "file_creation_time": 1769012469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 33981 microseconds, and 9525 cpu microseconds.
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.436380) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 4568121 bytes OK
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.436402) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.441683) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.441700) EVENT_LOG_v1 {"time_micros": 1769012469441695, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.441726) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8100188, prev total WAL file size 8100188, number of live WAL files 2.
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.444287) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(4461KB)], [32(11MB)]
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012469444332, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 16168669, "oldest_snapshot_seqno": -1}
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4947 keys, 11787654 bytes, temperature: kUnknown
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012469521900, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 11787654, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11752789, "index_size": 21379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 123701, "raw_average_key_size": 25, "raw_value_size": 11661503, "raw_average_value_size": 2357, "num_data_blocks": 895, "num_entries": 4947, "num_filter_entries": 4947, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769012469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.522276) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 11787654 bytes
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.523744) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.1 rd, 151.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.4, 11.1 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(6.1) write-amplify(2.6) OK, records in: 5765, records dropped: 818 output_compression: NoCompression
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.523779) EVENT_LOG_v1 {"time_micros": 1769012469523761, "job": 14, "event": "compaction_finished", "compaction_time_micros": 77691, "compaction_time_cpu_micros": 27456, "output_level": 6, "num_output_files": 1, "total_output_size": 11787654, "num_input_records": 5765, "num_output_records": 4947, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012469524849, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012469527117, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.444190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.527288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.527299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.527303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.527306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:21:09 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:09.527310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:21:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:09.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:21:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:21:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:21:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:21:09 np0005590810 python3.9[211046]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:10 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:10 np0005590810 python3.9[211169]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012469.3603714-2304-227580399002446/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:10 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
Jan 21 11:21:10 np0005590810 python3.9[211322]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:21:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:11.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:21:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:11 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:11 np0005590810 python3.9[211446]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012470.5487838-2304-71237439624689/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:11.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:12 np0005590810 python3.9[211598]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:12 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:12 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:21:12 np0005590810 python3.9[211721]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012471.7255282-2304-263156214098015/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:13 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:13.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:13 np0005590810 python3.9[211875]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:13.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:14 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:21:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:14 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:14 np0005590810 python3.9[211998]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012472.9772494-2304-87681188188196/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:21:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:15.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:15 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:15 np0005590810 python3.9[212151]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:21:15] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 21 11:21:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:21:15] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 21 11:21:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:21:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:15.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:21:15 np0005590810 python3.9[212275]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012474.76516-2304-165904372031837/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:16 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:16 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:16 np0005590810 python3.9[212427]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:21:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:17.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:21:17 np0005590810 python3.9[212551]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012476.0565162-2304-134250797403151/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:17.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:17 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:17.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:17 np0005590810 python3.9[212704]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:18 np0005590810 python3.9[212827]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012477.2421756-2304-40016894668665/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:18 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:18 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:21:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:18.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:21:18 np0005590810 python3.9[213005]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:19.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:19 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:21:19 np0005590810 python3.9[213129]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012478.4818974-2304-19450669809229/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:21:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:19.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:21:20 np0005590810 python3.9[213281]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:20 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:20 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 21 11:21:21 np0005590810 python3.9[213405]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012479.7160885-2304-69674076093187/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:21 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:21:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:21.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:21:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:21.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:21 np0005590810 python3.9[213558]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:21:22.009 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:21:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:21:22.010 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:21:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:21:22.010 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:21:22 np0005590810 python3.9[213681]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012481.1814601-2304-10998354002523/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:22 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:22 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:21:22 np0005590810 python3.9[213834]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:23 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:23.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:23 np0005590810 python3.9[213958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012482.472475-2304-268876756984744/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:23.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:21:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:21:24 np0005590810 python3.9[214110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:24 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:21:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:24 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:21:24 np0005590810 python3.9[214233]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012483.705795-2304-12701334409053/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:25 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:25.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:25 np0005590810 python3.9[214387]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:21:25] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 21 11:21:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:21:25] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 21 11:21:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:25.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:25 np0005590810 podman[214482]: 2026-01-21 16:21:25.976401275 +0000 UTC m=+0.071587575 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 21 11:21:26 np0005590810 python3.9[214525]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012484.9423018-2304-277456991239780/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:26 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:26 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:21:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:27.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:21:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:27 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:27.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:27.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:28 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:28 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:28 np0005590810 python3.9[214681]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:21:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:21:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:28.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:21:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:29 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:29.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:21:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:29.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:29 np0005590810 python3.9[214838]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 21 11:21:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:30 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:30 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:21:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:31 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37980014c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:31.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:31.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162131 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:21:32 np0005590810 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 21 11:21:32 np0005590810 python3.9[214998]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:32 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:32 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784003480 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:21:32 np0005590810 python3.9[215150]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:33 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:21:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:33.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.402947) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012493403002, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 447, "num_deletes": 251, "total_data_size": 464522, "memory_usage": 472544, "flush_reason": "Manual Compaction"}
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012493409627, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 457391, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17154, "largest_seqno": 17600, "table_properties": {"data_size": 454824, "index_size": 667, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6124, "raw_average_key_size": 18, "raw_value_size": 449755, "raw_average_value_size": 1371, "num_data_blocks": 31, "num_entries": 328, "num_filter_entries": 328, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769012469, "oldest_key_time": 1769012469, "file_creation_time": 1769012493, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 6716 microseconds, and 2637 cpu microseconds.
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.409667) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 457391 bytes OK
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.409686) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.411697) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.411717) EVENT_LOG_v1 {"time_micros": 1769012493411710, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.411735) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 461850, prev total WAL file size 461850, number of live WAL files 2.
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.412352) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(446KB)], [35(11MB)]
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012493412449, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 12245045, "oldest_snapshot_seqno": -1}
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4765 keys, 10115320 bytes, temperature: kUnknown
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012493481706, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 10115320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10083000, "index_size": 19282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 120571, "raw_average_key_size": 25, "raw_value_size": 9996123, "raw_average_value_size": 2097, "num_data_blocks": 803, "num_entries": 4765, "num_filter_entries": 4765, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769012493, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.481986) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 10115320 bytes
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.483717) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.6 rd, 145.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 11.2 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(48.9) write-amplify(22.1) OK, records in: 5275, records dropped: 510 output_compression: NoCompression
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.483747) EVENT_LOG_v1 {"time_micros": 1769012493483734, "job": 16, "event": "compaction_finished", "compaction_time_micros": 69334, "compaction_time_cpu_micros": 23897, "output_level": 6, "num_output_files": 1, "total_output_size": 10115320, "num_input_records": 5275, "num_output_records": 4765, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012493484019, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012493486208, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.412126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.486314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.486323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.486326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.486329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:21:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:21:33.486332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:21:33 np0005590810 python3.9[215304]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:33.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:34 np0005590810 python3.9[215456]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:34 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:21:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:34 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:21:35 np0005590810 python3.9[215609]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:35 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784003480 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:35.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:21:35] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 21 11:21:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:21:35] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 21 11:21:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:35.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:36 np0005590810 python3.9[215762]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:36 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:36 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:21:36 np0005590810 python3.9[215914]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:37.057Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:21:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:37.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:21:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:37 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:37.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:37 np0005590810 python3.9[216068]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:37.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:38 np0005590810 python3.9[216220]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:38 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784003480 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:38 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:38 np0005590810 podman[216369]: 2026-01-21 16:21:38.707381547 +0000 UTC m=+0.106315429 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 21 11:21:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:21:38 np0005590810 python3.9[216411]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:38.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:21:39
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'images', 'default.rgw.control', 'default.rgw.log', '.mgr', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:21:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:21:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:21:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:39 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:21:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:39.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:21:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:21:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:39.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:21:40 np0005590810 python3.9[216575]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:21:40 np0005590810 systemd[1]: Reloading.
Jan 21 11:21:40 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:21:40 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:21:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:40 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:40 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798002cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:40 np0005590810 systemd[1]: Starting libvirt logging daemon socket...
Jan 21 11:21:40 np0005590810 systemd[1]: Listening on libvirt logging daemon socket.
Jan 21 11:21:40 np0005590810 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 21 11:21:40 np0005590810 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 21 11:21:40 np0005590810 systemd[1]: Starting libvirt logging daemon...
Jan 21 11:21:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:21:40 np0005590810 systemd[1]: Started libvirt logging daemon.
Jan 21 11:21:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:41 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:41.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:41 np0005590810 python3.9[216772]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:21:41 np0005590810 systemd[1]: Reloading.
Jan 21 11:21:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:41.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:41 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:21:41 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:21:41 np0005590810 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 21 11:21:42 np0005590810 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 21 11:21:42 np0005590810 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 21 11:21:42 np0005590810 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 21 11:21:42 np0005590810 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 21 11:21:42 np0005590810 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 21 11:21:42 np0005590810 systemd[1]: Starting libvirt nodedev daemon...
Jan 21 11:21:42 np0005590810 systemd[1]: Started libvirt nodedev daemon.
Jan 21 11:21:42 np0005590810 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 21 11:21:42 np0005590810 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 21 11:21:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:42 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:21:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:42 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784003480 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:42 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:42 np0005590810 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 21 11:21:42 np0005590810 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 21 11:21:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:21:42 np0005590810 python3.9[216993]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:21:42 np0005590810 systemd[1]: Reloading.
Jan 21 11:21:42 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:21:42 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:21:43 np0005590810 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 21 11:21:43 np0005590810 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 21 11:21:43 np0005590810 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 21 11:21:43 np0005590810 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 21 11:21:43 np0005590810 systemd[1]: Starting libvirt proxy daemon...
Jan 21 11:21:43 np0005590810 systemd[1]: Started libvirt proxy daemon.
Jan 21 11:21:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:43 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:43.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:43 np0005590810 setroubleshoot[216836]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 3f98b7fd-6da8-4687-b5fc-38a4274e9c9d
Jan 21 11:21:43 np0005590810 setroubleshoot[216836]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 21 11:21:43 np0005590810 setroubleshoot[216836]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 3f98b7fd-6da8-4687-b5fc-38a4274e9c9d
Jan 21 11:21:43 np0005590810 setroubleshoot[216836]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 21 11:21:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:43.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:44 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:44 np0005590810 python3.9[217213]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:21:44 np0005590810 systemd[1]: Reloading.
Jan 21 11:21:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:44 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784003480 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:44 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:21:44 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:21:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:21:44 np0005590810 systemd[1]: Listening on libvirt locking daemon socket.
Jan 21 11:21:44 np0005590810 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 21 11:21:44 np0005590810 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 21 11:21:44 np0005590810 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 21 11:21:44 np0005590810 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 21 11:21:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:21:44 np0005590810 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 21 11:21:44 np0005590810 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 21 11:21:44 np0005590810 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 21 11:21:44 np0005590810 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 21 11:21:44 np0005590810 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 21 11:21:44 np0005590810 systemd[1]: Starting libvirt QEMU daemon...
Jan 21 11:21:44 np0005590810 systemd[1]: Started libvirt QEMU daemon.
Jan 21 11:21:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:45 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:45.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:21:45] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:21:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:21:45] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:21:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:45 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:21:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:45 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:21:45 np0005590810 python3.9[217430]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:21:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:45.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:45 np0005590810 systemd[1]: Reloading.
Jan 21 11:21:45 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:21:45 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:21:46 np0005590810 systemd[1]: Starting libvirt secret daemon socket...
Jan 21 11:21:46 np0005590810 systemd[1]: Listening on libvirt secret daemon socket.
Jan 21 11:21:46 np0005590810 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 21 11:21:46 np0005590810 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 21 11:21:46 np0005590810 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 21 11:21:46 np0005590810 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 21 11:21:46 np0005590810 systemd[1]: Starting libvirt secret daemon...
Jan 21 11:21:46 np0005590810 systemd[1]: Started libvirt secret daemon.
Jan 21 11:21:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:46 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:46 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Jan 21 11:21:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:47.058Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:21:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:47.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:21:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:47 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:47.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:47.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:47 np0005590810 python3.9[217643]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:48 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:48 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:48 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:21:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Jan 21 11:21:48 np0005590810 python3.9[217795]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 11:21:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:48.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:21:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:49 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:49.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:21:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:49.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:21:49 np0005590810 python3.9[217949]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:21:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:21:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:50 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:50 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:50 np0005590810 python3.9[218131]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 11:21:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:21:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:21:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:51 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 11:21:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:51.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:21:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:21:51 np0005590810 python3.9[218334]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:51.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:51 np0005590810 podman[218520]: 2026-01-21 16:21:51.976171418 +0000 UTC m=+0.049296778 container create 2608eb9d873b26806c44a31f061f532f5902bafbf834478e3672d3f76a1f36e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 21 11:21:52 np0005590810 systemd[1]: Started libpod-conmon-2608eb9d873b26806c44a31f061f532f5902bafbf834478e3672d3f76a1f36e9.scope.
Jan 21 11:21:52 np0005590810 podman[218520]: 2026-01-21 16:21:51.953854731 +0000 UTC m=+0.026980121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:21:52 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:21:52 np0005590810 podman[218520]: 2026-01-21 16:21:52.076599747 +0000 UTC m=+0.149725107 container init 2608eb9d873b26806c44a31f061f532f5902bafbf834478e3672d3f76a1f36e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:21:52 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:21:52 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:21:52 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:21:52 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:21:52 np0005590810 podman[218520]: 2026-01-21 16:21:52.085517288 +0000 UTC m=+0.158642648 container start 2608eb9d873b26806c44a31f061f532f5902bafbf834478e3672d3f76a1f36e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_austin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 11:21:52 np0005590810 podman[218520]: 2026-01-21 16:21:52.089403756 +0000 UTC m=+0.162529116 container attach 2608eb9d873b26806c44a31f061f532f5902bafbf834478e3672d3f76a1f36e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:21:52 np0005590810 hardcore_austin[218564]: 167 167
Jan 21 11:21:52 np0005590810 podman[218520]: 2026-01-21 16:21:52.094765768 +0000 UTC m=+0.167891128 container died 2608eb9d873b26806c44a31f061f532f5902bafbf834478e3672d3f76a1f36e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:21:52 np0005590810 systemd[1]: libpod-2608eb9d873b26806c44a31f061f532f5902bafbf834478e3672d3f76a1f36e9.scope: Deactivated successfully.
Jan 21 11:21:52 np0005590810 systemd[1]: var-lib-containers-storage-overlay-1989e38d000609160296c7a975eb7e0cb1da3c7f34683a3e1eadeeca9b092db6-merged.mount: Deactivated successfully.
Jan 21 11:21:52 np0005590810 podman[218520]: 2026-01-21 16:21:52.157527764 +0000 UTC m=+0.230653114 container remove 2608eb9d873b26806c44a31f061f532f5902bafbf834478e3672d3f76a1f36e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_austin, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:21:52 np0005590810 systemd[1]: libpod-conmon-2608eb9d873b26806c44a31f061f532f5902bafbf834478e3672d3f76a1f36e9.scope: Deactivated successfully.
Jan 21 11:21:52 np0005590810 python3.9[218560]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012511.0959413-3378-222976451420350/.source.xml follow=False _original_basename=secret.xml.j2 checksum=4fd717201a1d429c4f96ff7910daf76b983152cb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:52 np0005590810 podman[218607]: 2026-01-21 16:21:52.347480322 +0000 UTC m=+0.052392112 container create f968d9e3dbb4057099d9f2e7c7c142fd8d3d9f43c399a98a0ceb321b3c1209c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lichterman, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:21:52 np0005590810 systemd[1]: Started libpod-conmon-f968d9e3dbb4057099d9f2e7c7c142fd8d3d9f43c399a98a0ceb321b3c1209c9.scope.
Jan 21 11:21:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:52 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:52 np0005590810 podman[218607]: 2026-01-21 16:21:52.325096182 +0000 UTC m=+0.030008022 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:21:52 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:21:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae05ccbf3daf90a0160bdc0c6a8987bb428ae14feff49f7a52cf51526ec2b307/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae05ccbf3daf90a0160bdc0c6a8987bb428ae14feff49f7a52cf51526ec2b307/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae05ccbf3daf90a0160bdc0c6a8987bb428ae14feff49f7a52cf51526ec2b307/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae05ccbf3daf90a0160bdc0c6a8987bb428ae14feff49f7a52cf51526ec2b307/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae05ccbf3daf90a0160bdc0c6a8987bb428ae14feff49f7a52cf51526ec2b307/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:52 np0005590810 podman[218607]: 2026-01-21 16:21:52.448945983 +0000 UTC m=+0.153857773 container init f968d9e3dbb4057099d9f2e7c7c142fd8d3d9f43c399a98a0ceb321b3c1209c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lichterman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:21:52 np0005590810 podman[218607]: 2026-01-21 16:21:52.457819192 +0000 UTC m=+0.162730982 container start f968d9e3dbb4057099d9f2e7c7c142fd8d3d9f43c399a98a0ceb321b3c1209c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:21:52 np0005590810 podman[218607]: 2026-01-21 16:21:52.461815073 +0000 UTC m=+0.166726963 container attach f968d9e3dbb4057099d9f2e7c7c142fd8d3d9f43c399a98a0ceb321b3c1209c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lichterman, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 11:21:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:52 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:52 np0005590810 condescending_lichterman[218630]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:21:52 np0005590810 condescending_lichterman[218630]: --> All data devices are unavailable
Jan 21 11:21:52 np0005590810 systemd[1]: libpod-f968d9e3dbb4057099d9f2e7c7c142fd8d3d9f43c399a98a0ceb321b3c1209c9.scope: Deactivated successfully.
Jan 21 11:21:52 np0005590810 podman[218607]: 2026-01-21 16:21:52.829465886 +0000 UTC m=+0.534377676 container died f968d9e3dbb4057099d9f2e7c7c142fd8d3d9f43c399a98a0ceb321b3c1209c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lichterman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:21:52 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ae05ccbf3daf90a0160bdc0c6a8987bb428ae14feff49f7a52cf51526ec2b307-merged.mount: Deactivated successfully.
Jan 21 11:21:52 np0005590810 podman[218607]: 2026-01-21 16:21:52.881566188 +0000 UTC m=+0.586477978 container remove f968d9e3dbb4057099d9f2e7c7c142fd8d3d9f43c399a98a0ceb321b3c1209c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lichterman, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:21:52 np0005590810 systemd[1]: libpod-conmon-f968d9e3dbb4057099d9f2e7c7c142fd8d3d9f43c399a98a0ceb321b3c1209c9.scope: Deactivated successfully.
Jan 21 11:21:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 21 11:21:53 np0005590810 python3.9[218834]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine d9745984-fea8-5195-8ec5-61f685b5c785#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:21:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:53 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:21:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:53.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:21:53 np0005590810 podman[218887]: 2026-01-21 16:21:53.529700977 +0000 UTC m=+0.041385387 container create 9a6bfccc1623c50ad1b1ca86ae182ca5829158d62488d65dfcf09c4821683225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bartik, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:21:53 np0005590810 systemd[1]: Started libpod-conmon-9a6bfccc1623c50ad1b1ca86ae182ca5829158d62488d65dfcf09c4821683225.scope.
Jan 21 11:21:53 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:21:53 np0005590810 podman[218887]: 2026-01-21 16:21:53.513840746 +0000 UTC m=+0.025525176 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:21:53 np0005590810 podman[218887]: 2026-01-21 16:21:53.619181364 +0000 UTC m=+0.130865794 container init 9a6bfccc1623c50ad1b1ca86ae182ca5829158d62488d65dfcf09c4821683225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 21 11:21:53 np0005590810 podman[218887]: 2026-01-21 16:21:53.63024619 +0000 UTC m=+0.141930600 container start 9a6bfccc1623c50ad1b1ca86ae182ca5829158d62488d65dfcf09c4821683225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bartik, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:21:53 np0005590810 great_bartik[218928]: 167 167
Jan 21 11:21:53 np0005590810 systemd[1]: libpod-9a6bfccc1623c50ad1b1ca86ae182ca5829158d62488d65dfcf09c4821683225.scope: Deactivated successfully.
Jan 21 11:21:53 np0005590810 conmon[218928]: conmon 9a6bfccc1623c50ad1b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a6bfccc1623c50ad1b1ca86ae182ca5829158d62488d65dfcf09c4821683225.scope/container/memory.events
Jan 21 11:21:53 np0005590810 podman[218887]: 2026-01-21 16:21:53.640049558 +0000 UTC m=+0.151733968 container attach 9a6bfccc1623c50ad1b1ca86ae182ca5829158d62488d65dfcf09c4821683225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bartik, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 11:21:53 np0005590810 podman[218887]: 2026-01-21 16:21:53.640665376 +0000 UTC m=+0.152349786 container died 9a6bfccc1623c50ad1b1ca86ae182ca5829158d62488d65dfcf09c4821683225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:21:53 np0005590810 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 21 11:21:53 np0005590810 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.029s CPU time.
Jan 21 11:21:53 np0005590810 systemd[1]: var-lib-containers-storage-overlay-0a98148e25fe07f4205f15a3e9656f5be02f7d807e23e768e49ef89cbd00bc57-merged.mount: Deactivated successfully.
Jan 21 11:21:53 np0005590810 podman[218887]: 2026-01-21 16:21:53.687151408 +0000 UTC m=+0.198835818 container remove 9a6bfccc1623c50ad1b1ca86ae182ca5829158d62488d65dfcf09c4821683225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bartik, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:21:53 np0005590810 systemd[1]: libpod-conmon-9a6bfccc1623c50ad1b1ca86ae182ca5829158d62488d65dfcf09c4821683225.scope: Deactivated successfully.
Jan 21 11:21:53 np0005590810 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 21 11:21:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:53.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:53 np0005590810 podman[219007]: 2026-01-21 16:21:53.865351048 +0000 UTC m=+0.043840292 container create 1c03d618635589c9afdd73aa5b7a7d6a0696989b8626d546a9f561bc7dacc3a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mestorf, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:21:53 np0005590810 systemd[1]: Started libpod-conmon-1c03d618635589c9afdd73aa5b7a7d6a0696989b8626d546a9f561bc7dacc3a5.scope.
Jan 21 11:21:53 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:21:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162153 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:21:53 np0005590810 podman[219007]: 2026-01-21 16:21:53.847932049 +0000 UTC m=+0.026421333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:21:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3a804316271103cf8e8823261bfc5c941948426a06edb880416b0c0f2deb3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3a804316271103cf8e8823261bfc5c941948426a06edb880416b0c0f2deb3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3a804316271103cf8e8823261bfc5c941948426a06edb880416b0c0f2deb3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3a804316271103cf8e8823261bfc5c941948426a06edb880416b0c0f2deb3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:53 np0005590810 podman[219007]: 2026-01-21 16:21:53.95894328 +0000 UTC m=+0.137432554 container init 1c03d618635589c9afdd73aa5b7a7d6a0696989b8626d546a9f561bc7dacc3a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mestorf, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:21:53 np0005590810 podman[219007]: 2026-01-21 16:21:53.970351977 +0000 UTC m=+0.148841221 container start 1c03d618635589c9afdd73aa5b7a7d6a0696989b8626d546a9f561bc7dacc3a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mestorf, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:21:53 np0005590810 podman[219007]: 2026-01-21 16:21:53.977375429 +0000 UTC m=+0.155864783 container attach 1c03d618635589c9afdd73aa5b7a7d6a0696989b8626d546a9f561bc7dacc3a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 21 11:21:54 np0005590810 python3.9[219098]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:21:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]: {
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:    "0": [
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:        {
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            "devices": [
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "/dev/loop3"
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            ],
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            "lv_name": "ceph_lv0",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            "lv_size": "21470642176",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            "name": "ceph_lv0",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            "tags": {
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.cluster_name": "ceph",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.crush_device_class": "",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.encrypted": "0",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.osd_id": "0",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.type": "block",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.vdo": "0",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:                "ceph.with_tpm": "0"
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            },
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            "type": "block",
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:            "vg_name": "ceph_vg0"
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:        }
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]:    ]
Jan 21 11:21:54 np0005590810 competent_mestorf[219061]: }
Jan 21 11:21:54 np0005590810 systemd[1]: libpod-1c03d618635589c9afdd73aa5b7a7d6a0696989b8626d546a9f561bc7dacc3a5.scope: Deactivated successfully.
Jan 21 11:21:54 np0005590810 podman[219007]: 2026-01-21 16:21:54.306972436 +0000 UTC m=+0.485461680 container died 1c03d618635589c9afdd73aa5b7a7d6a0696989b8626d546a9f561bc7dacc3a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:21:54 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ec3a804316271103cf8e8823261bfc5c941948426a06edb880416b0c0f2deb3e-merged.mount: Deactivated successfully.
Jan 21 11:21:54 np0005590810 podman[219007]: 2026-01-21 16:21:54.347634082 +0000 UTC m=+0.526123336 container remove 1c03d618635589c9afdd73aa5b7a7d6a0696989b8626d546a9f561bc7dacc3a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mestorf, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 11:21:54 np0005590810 systemd[1]: libpod-conmon-1c03d618635589c9afdd73aa5b7a7d6a0696989b8626d546a9f561bc7dacc3a5.scope: Deactivated successfully.
Jan 21 11:21:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:54 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:54 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:21:54 np0005590810 podman[219358]: 2026-01-21 16:21:54.964609054 +0000 UTC m=+0.044219534 container create 9d3849f8a7a0dfebec56c0fbd2c991c09738ad440dd829f47c03f4d66445aec1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:21:55 np0005590810 systemd[1]: Started libpod-conmon-9d3849f8a7a0dfebec56c0fbd2c991c09738ad440dd829f47c03f4d66445aec1.scope.
Jan 21 11:21:55 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:21:55 np0005590810 podman[219358]: 2026-01-21 16:21:54.944473403 +0000 UTC m=+0.024083913 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:21:55 np0005590810 podman[219358]: 2026-01-21 16:21:55.049944625 +0000 UTC m=+0.129555135 container init 9d3849f8a7a0dfebec56c0fbd2c991c09738ad440dd829f47c03f4d66445aec1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:21:55 np0005590810 podman[219358]: 2026-01-21 16:21:55.059593418 +0000 UTC m=+0.139203898 container start 9d3849f8a7a0dfebec56c0fbd2c991c09738ad440dd829f47c03f4d66445aec1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_leakey, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:21:55 np0005590810 podman[219358]: 2026-01-21 16:21:55.063713773 +0000 UTC m=+0.143324323 container attach 9d3849f8a7a0dfebec56c0fbd2c991c09738ad440dd829f47c03f4d66445aec1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_leakey, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:21:55 np0005590810 keen_leakey[219398]: 167 167
Jan 21 11:21:55 np0005590810 systemd[1]: libpod-9d3849f8a7a0dfebec56c0fbd2c991c09738ad440dd829f47c03f4d66445aec1.scope: Deactivated successfully.
Jan 21 11:21:55 np0005590810 conmon[219398]: conmon 9d3849f8a7a0dfebec56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d3849f8a7a0dfebec56c0fbd2c991c09738ad440dd829f47c03f4d66445aec1.scope/container/memory.events
Jan 21 11:21:55 np0005590810 podman[219358]: 2026-01-21 16:21:55.068394535 +0000 UTC m=+0.148005015 container died 9d3849f8a7a0dfebec56c0fbd2c991c09738ad440dd829f47c03f4d66445aec1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_leakey, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:21:55 np0005590810 systemd[1]: var-lib-containers-storage-overlay-41a73045dcb8d046adc48a712f6a62d5ecf1534f38faa97703d63b63d01b2eae-merged.mount: Deactivated successfully.
Jan 21 11:21:55 np0005590810 podman[219358]: 2026-01-21 16:21:55.110504764 +0000 UTC m=+0.190115244 container remove 9d3849f8a7a0dfebec56c0fbd2c991c09738ad440dd829f47c03f4d66445aec1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_leakey, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:21:55 np0005590810 systemd[1]: libpod-conmon-9d3849f8a7a0dfebec56c0fbd2c991c09738ad440dd829f47c03f4d66445aec1.scope: Deactivated successfully.
Jan 21 11:21:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 21 11:21:55 np0005590810 podman[219446]: 2026-01-21 16:21:55.326823072 +0000 UTC m=+0.052220316 container create 8225de22cb7ff8b81b1e83f039b0a6848072e1539ab06a541982ead5efbd5bbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:21:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:55 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:55.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:55 np0005590810 systemd[1]: Started libpod-conmon-8225de22cb7ff8b81b1e83f039b0a6848072e1539ab06a541982ead5efbd5bbe.scope.
Jan 21 11:21:55 np0005590810 podman[219446]: 2026-01-21 16:21:55.307811635 +0000 UTC m=+0.033208899 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:21:55 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:21:55 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e032ecadcb59f34cb546461fd3c5e6371a860a1d2a5a46bf390208095f46b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:55 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e032ecadcb59f34cb546461fd3c5e6371a860a1d2a5a46bf390208095f46b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:55 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e032ecadcb59f34cb546461fd3c5e6371a860a1d2a5a46bf390208095f46b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:55 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e032ecadcb59f34cb546461fd3c5e6371a860a1d2a5a46bf390208095f46b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:21:55 np0005590810 podman[219446]: 2026-01-21 16:21:55.427129917 +0000 UTC m=+0.152527191 container init 8225de22cb7ff8b81b1e83f039b0a6848072e1539ab06a541982ead5efbd5bbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_perlman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:21:55 np0005590810 podman[219446]: 2026-01-21 16:21:55.434784649 +0000 UTC m=+0.160181893 container start 8225de22cb7ff8b81b1e83f039b0a6848072e1539ab06a541982ead5efbd5bbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_perlman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:21:55 np0005590810 podman[219446]: 2026-01-21 16:21:55.438853453 +0000 UTC m=+0.164250697 container attach 8225de22cb7ff8b81b1e83f039b0a6848072e1539ab06a541982ead5efbd5bbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_perlman, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:21:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:21:55] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:21:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:21:55] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:21:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.002000061s ======
Jan 21 11:21:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:55.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000061s
Jan 21 11:21:56 np0005590810 lvm[219732]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:21:56 np0005590810 lvm[219732]: VG ceph_vg0 finished
Jan 21 11:21:56 np0005590810 keen_perlman[219497]: {}
Jan 21 11:21:56 np0005590810 podman[219707]: 2026-01-21 16:21:56.199086055 +0000 UTC m=+0.065227851 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 21 11:21:56 np0005590810 systemd[1]: libpod-8225de22cb7ff8b81b1e83f039b0a6848072e1539ab06a541982ead5efbd5bbe.scope: Deactivated successfully.
Jan 21 11:21:56 np0005590810 systemd[1]: libpod-8225de22cb7ff8b81b1e83f039b0a6848072e1539ab06a541982ead5efbd5bbe.scope: Consumed 1.318s CPU time.
Jan 21 11:21:56 np0005590810 podman[219772]: 2026-01-21 16:21:56.274077853 +0000 UTC m=+0.024720022 container died 8225de22cb7ff8b81b1e83f039b0a6848072e1539ab06a541982ead5efbd5bbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_perlman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:21:56 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a6e032ecadcb59f34cb546461fd3c5e6371a860a1d2a5a46bf390208095f46b9-merged.mount: Deactivated successfully.
Jan 21 11:21:56 np0005590810 podman[219772]: 2026-01-21 16:21:56.321357558 +0000 UTC m=+0.071999717 container remove 8225de22cb7ff8b81b1e83f039b0a6848072e1539ab06a541982ead5efbd5bbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 21 11:21:56 np0005590810 systemd[1]: libpod-conmon-8225de22cb7ff8b81b1e83f039b0a6848072e1539ab06a541982ead5efbd5bbe.scope: Deactivated successfully.
Jan 21 11:21:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:21:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:21:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:21:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:56 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:56 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:21:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:56 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:56 np0005590810 python3.9[219836]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:57.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:21:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 686 B/s rd, 196 B/s wr, 1 op/s
Jan 21 11:21:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:57 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:57.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:57 np0005590810 python3.9[220015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:57 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:21:57 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:21:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:57.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:57 np0005590810 python3.9[220138]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012516.8472438-3543-233279388503082/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:58 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:21:58.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:21:58 np0005590810 python3.9[220315]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:21:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 686 B/s rd, 196 B/s wr, 1 op/s
Jan 21 11:21:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:21:59 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:21:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:21:59.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:59 np0005590810 python3.9[220469]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:21:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:21:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:21:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:21:59.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:21:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:00 np0005590810 python3.9[220547]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:00 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:00 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:00 np0005590810 python3.9[220700]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:22:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 294 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:22:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:01 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:01.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:01 np0005590810 python3.9[220779]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xo86qt2y recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:01.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:02 np0005590810 python3.9[220931]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:22:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:02 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3784004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:02 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:02 np0005590810 python3.9[221011]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:22:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:03 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:03.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:03 np0005590810 python3.9[221165]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:22:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:03.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:04 np0005590810 python3[221318]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 11:22:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:04 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:04 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:22:05 np0005590810 python3.9[221471]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:22:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:05 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:05.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:22:05] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:22:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:22:05] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:22:05 np0005590810 python3.9[221550]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:05.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:06 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:06 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:06 np0005590810 python3.9[221702]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:22:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:07.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:22:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:22:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:07 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:07.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:07 np0005590810 python3.9[221829]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012526.2300904-3810-109536145175434/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:07.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:08 np0005590810 python3.9[221981]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:22:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:08 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:08 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:08 np0005590810 python3.9[222059]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:08.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:22:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:22:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:22:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:22:09 np0005590810 podman[222185]: 2026-01-21 16:22:09.312205561 +0000 UTC m=+0.098768339 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 11:22:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:22:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:22:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:09 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:09.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:09 np0005590810 python3.9[222231]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:22:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:22:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:22:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:22:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:22:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:09.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:09 np0005590810 python3.9[222317]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:10 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:10 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:10 np0005590810 python3.9[222469]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:22:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:22:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:11 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:11.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:11 np0005590810 python3.9[222596]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769012530.3155518-3927-221923385768876/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:11.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:12 np0005590810 python3.9[222748]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:12 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:12 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:13 np0005590810 python3.9[222901]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:22:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:22:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:13 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:13.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:13.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:14 np0005590810 python3.9[223057]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:14 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:14 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:22:15 np0005590810 python3.9[223210]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:22:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:15 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:15.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:22:15] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Jan 21 11:22:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:22:15] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Jan 21 11:22:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:15.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:16 np0005590810 python3.9[223364]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:22:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:16 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:16 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:17.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:22:17 np0005590810 python3.9[223519]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:22:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 21 11:22:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:17 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:17.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:17.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:18 np0005590810 python3.9[223675]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:18 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:18 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:18 np0005590810 python3.9[223852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:22:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:18.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:22:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:18.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:22:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:18.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:22:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:22:19 np0005590810 python3.9[223977]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012538.2874424-4143-36158506519790/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:19 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:19.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:19.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:20 np0005590810 python3.9[224129]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:22:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:20 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:20 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:20 np0005590810 python3.9[224252]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012539.778031-4188-276372182299641/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:22:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:21 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:21.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:21 np0005590810 python3.9[224406]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:22:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:21.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:22:22.009 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:22:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:22:22.010 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:22:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:22:22.010 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:22:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:22 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:22 np0005590810 python3.9[224529]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012541.4015338-4233-263954047459540/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:22 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:23 np0005590810 python3.9[224682]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:22:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:22:23 np0005590810 systemd[1]: Reloading.
Jan 21 11:22:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:23 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:23 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:22:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:23.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:23 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:22:23 np0005590810 systemd[1]: Reached target edpm_libvirt.target.
Jan 21 11:22:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:23.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:22:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:22:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:24 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:24 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:24 np0005590810 python3.9[224874]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 21 11:22:24 np0005590810 systemd[1]: Reloading.
Jan 21 11:22:24 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:22:24 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:22:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:25 np0005590810 systemd[1]: Reloading.
Jan 21 11:22:25 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:22:25 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:22:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:22:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:25 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:25.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:22:25] "GET /metrics HTTP/1.1" 200 48351 "" "Prometheus/2.51.0"
Jan 21 11:22:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:22:25] "GET /metrics HTTP/1.1" 200 48351 "" "Prometheus/2.51.0"
Jan 21 11:22:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:25.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:25 np0005590810 systemd[1]: session-53.scope: Deactivated successfully.
Jan 21 11:22:25 np0005590810 systemd[1]: session-53.scope: Consumed 3min 39.403s CPU time.
Jan 21 11:22:25 np0005590810 systemd-logind[795]: Session 53 logged out. Waiting for processes to exit.
Jan 21 11:22:25 np0005590810 systemd-logind[795]: Removed session 53.
Jan 21 11:22:25 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:22:25 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:22:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:26 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:26 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:26 np0005590810 podman[224974]: 2026-01-21 16:22:26.682874939 +0000 UTC m=+0.059512759 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:22:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:27.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:22:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 21 11:22:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:27 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:27.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:27.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:28 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3798004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:28 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:28.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:22:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:28.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:22:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:28.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:22:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:22:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:29 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:29.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:29.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:30 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f377c0029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:30 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:30 np0005590810 systemd-logind[795]: New session 54 of user zuul.
Jan 21 11:22:30 np0005590810 systemd[1]: Started Session 54 of User zuul.
Jan 21 11:22:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:22:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:31 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:31.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:31 np0005590810 python3.9[225153]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:22:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:31.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:32 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:32 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3790003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:33 np0005590810 python3.9[225309]: ansible-ansible.builtin.service_facts Invoked
Jan 21 11:22:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:22:33 np0005590810 network[225327]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 11:22:33 np0005590810 network[225328]: 'network-scripts' will be removed from distribution in near future.
Jan 21 11:22:33 np0005590810 network[225329]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 11:22:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:33 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3788002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:33.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:33.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:34 np0005590810 kernel: ganesha.nfsd[220972]: segfault at 50 ip 00007f38299d932e sp 00007f379f7fd210 error 4 in libntirpc.so.5.8[7f38299be000+2c000] likely on CPU 6 (core 0, socket 6)
Jan 21 11:22:34 np0005590810 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 21 11:22:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[205420]: 21/01/2026 16:22:34 : epoch 6970fcdc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3780004230 fd 39 proxy ignored for local
Jan 21 11:22:34 np0005590810 systemd[1]: Started Process Core Dump (PID 225350/UID 0).
Jan 21 11:22:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:22:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:35.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:22:35] "GET /metrics HTTP/1.1" 200 48351 "" "Prometheus/2.51.0"
Jan 21 11:22:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:22:35] "GET /metrics HTTP/1.1" 200 48351 "" "Prometheus/2.51.0"
Jan 21 11:22:35 np0005590810 systemd-coredump[225351]: Process 205427 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 57:#012#0  0x00007f38299d932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Jan 21 11:22:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:35.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:35 np0005590810 systemd[1]: systemd-coredump@8-225350-0.service: Deactivated successfully.
Jan 21 11:22:35 np0005590810 systemd[1]: systemd-coredump@8-225350-0.service: Consumed 1.367s CPU time.
Jan 21 11:22:35 np0005590810 podman[225381]: 2026-01-21 16:22:35.990870985 +0000 UTC m=+0.030276206 container died b850a4cad2271834d01e4ef2e027fc8f338a3b460a9ee8a1b9f6ef7b3038386a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 21 11:22:36 np0005590810 systemd[1]: var-lib-containers-storage-overlay-842b9a9cd94ed67fe499e26d3db00a4fee9ab6b522f97f5582a76e6c4ba4cdf1-merged.mount: Deactivated successfully.
Jan 21 11:22:36 np0005590810 podman[225381]: 2026-01-21 16:22:36.435152939 +0000 UTC m=+0.474558130 container remove b850a4cad2271834d01e4ef2e027fc8f338a3b460a9ee8a1b9f6ef7b3038386a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:22:36 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Main process exited, code=exited, status=139/n/a
Jan 21 11:22:36 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Failed with result 'exit-code'.
Jan 21 11:22:36 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.757s CPU time.
Jan 21 11:22:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:37.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:22:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 21 11:22:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:37.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:37.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:38.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:22:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:38.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:22:39
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'images', '.mgr', '.nfs', 'cephfs.cephfs.meta', 'backups']
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:22:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:22:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:22:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:39.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:22:39 np0005590810 podman[225511]: 2026-01-21 16:22:39.497312453 +0000 UTC m=+0.130404248 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible)
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:22:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:22:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:39.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162240 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:22:40 np0005590810 python3.9[225709]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 11:22:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 21 11:22:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:22:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:41.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:22:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:22:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:41.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:22:41 np0005590810 python3.9[225795]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:22:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:22:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:43.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:43.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:22:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:45.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:22:45] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 21 11:22:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:22:45] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 21 11:22:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:45.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:46 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Scheduled restart job, restart counter is at 9.
Jan 21 11:22:46 np0005590810 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:22:46 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.757s CPU time.
Jan 21 11:22:46 np0005590810 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:22:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:47.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:22:47 np0005590810 podman[225848]: 2026-01-21 16:22:47.104076594 +0000 UTC m=+0.048376382 container create 0c03a276f648a0db622e602d6334f91f296fc3f32d5a575fe345dfd19c21221b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:22:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4179a48b47073a528dec75668db0cc3c4cf18cb0b724fc4ab3f51cd3a0a471ff/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 21 11:22:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4179a48b47073a528dec75668db0cc3c4cf18cb0b724fc4ab3f51cd3a0a471ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:22:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4179a48b47073a528dec75668db0cc3c4cf18cb0b724fc4ab3f51cd3a0a471ff/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:22:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4179a48b47073a528dec75668db0cc3c4cf18cb0b724fc4ab3f51cd3a0a471ff/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbatwb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:22:47 np0005590810 podman[225848]: 2026-01-21 16:22:47.081047341 +0000 UTC m=+0.025347149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:22:47 np0005590810 podman[225848]: 2026-01-21 16:22:47.176151623 +0000 UTC m=+0.120451401 container init 0c03a276f648a0db622e602d6334f91f296fc3f32d5a575fe345dfd19c21221b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 21 11:22:47 np0005590810 podman[225848]: 2026-01-21 16:22:47.181308507 +0000 UTC m=+0.125608295 container start 0c03a276f648a0db622e602d6334f91f296fc3f32d5a575fe345dfd19c21221b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:22:47 np0005590810 bash[225848]: 0c03a276f648a0db622e602d6334f91f296fc3f32d5a575fe345dfd19c21221b
Jan 21 11:22:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:47 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 21 11:22:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:47 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 21 11:22:47 np0005590810 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:22:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:47 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 21 11:22:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:47 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 21 11:22:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:47 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 21 11:22:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:47 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 21 11:22:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:22:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:47 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 21 11:22:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:47 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:22:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:47.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:47.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:48 np0005590810 python3.9[226058]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:22:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:48.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:22:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:22:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:49.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:49 np0005590810 python3.9[226212]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:22:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:49.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:50 np0005590810 python3.9[226365]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:22:51 np0005590810 python3.9[226518]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:22:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 21 11:22:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:51.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:51 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:22:51 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4028 writes, 18K keys, 4027 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s#012Cumulative WAL: 4028 writes, 4027 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1455 writes, 5916 keys, 1455 commit groups, 1.0 writes per commit group, ingest: 10.82 MB, 0.02 MB/s#012Interval WAL: 1455 writes, 1455 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     82.8      0.27              0.06         8    0.034       0      0       0.0       0.0#012  L6      1/0    9.65 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    139.7    119.1      0.63              0.18         7    0.090     33K   3680       0.0       0.0#012 Sum      1/0    9.65 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     97.4    108.1      0.90              0.24        15    0.060     33K   3680       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.1    139.2    133.1      0.29              0.09         6    0.048     15K   1844       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    139.7    119.1      0.63              0.18         7    0.090     33K   3680       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     84.1      0.27              0.06         7    0.038       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.0      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.022, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.08 MB/s write, 0.09 GB read, 0.07 MB/s read, 0.9 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e6f7731350#2 capacity: 304.00 MB usage: 4.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 9.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(271,4.11 MB,1.35074%) FilterBlock(16,100.92 KB,0.0324199%) IndexBlock(16,190.39 KB,0.0611606%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 21 11:22:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:51.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:52 np0005590810 python3.9[226672]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:22:52 np0005590810 python3.9[226795]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012571.5015543-240-218813267389086/.source.iscsi _original_basename=.38h4ydgm follow=False checksum=f9ce0e808740e50d0fcf4facfc8101f9f31fa285 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 21 11:22:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:53 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:22:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:53 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:22:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:53.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:53 np0005590810 python3.9[226949]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:53.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:22:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:22:54 np0005590810 python3.9[227101]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:22:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 21 11:22:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:55.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:22:55] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:22:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:22:55] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:22:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:55.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:55 np0005590810 python3.9[227255]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:22:56 np0005590810 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 21 11:22:56 np0005590810 python3.9[227411]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:22:56 np0005590810 podman[227437]: 2026-01-21 16:22:56.916309267 +0000 UTC m=+0.098930834 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 21 11:22:56 np0005590810 systemd[1]: Reloading.
Jan 21 11:22:57 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:22:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:57.070Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:22:57 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:22:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 21 11:22:57 np0005590810 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 21 11:22:57 np0005590810 systemd[1]: Starting Open-iSCSI...
Jan 21 11:22:57 np0005590810 kernel: Loading iSCSI transport class v2.0-870.
Jan 21 11:22:57 np0005590810 systemd[1]: Started Open-iSCSI.
Jan 21 11:22:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:57.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:57 np0005590810 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 21 11:22:57 np0005590810 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:22:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 21 11:22:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:22:57 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:22:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:22:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:57.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:22:58 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:22:58 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:22:58 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:22:58 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:22:58 np0005590810 podman[227805]: 2026-01-21 16:22:58.380105475 +0000 UTC m=+0.043764857 container create 77f0c6e3d98ac32a67e336ea05fa896c9d6b4504e572b301799906787fe555b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:22:58 np0005590810 systemd[1]: Started libpod-conmon-77f0c6e3d98ac32a67e336ea05fa896c9d6b4504e572b301799906787fe555b2.scope.
Jan 21 11:22:58 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:22:58 np0005590810 podman[227805]: 2026-01-21 16:22:58.36238406 +0000 UTC m=+0.026043462 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:22:58 np0005590810 podman[227805]: 2026-01-21 16:22:58.472405538 +0000 UTC m=+0.136064950 container init 77f0c6e3d98ac32a67e336ea05fa896c9d6b4504e572b301799906787fe555b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_keldysh, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 11:22:58 np0005590810 podman[227805]: 2026-01-21 16:22:58.483481161 +0000 UTC m=+0.147140543 container start 77f0c6e3d98ac32a67e336ea05fa896c9d6b4504e572b301799906787fe555b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_keldysh, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:22:58 np0005590810 funny_keldysh[227821]: 167 167
Jan 21 11:22:58 np0005590810 systemd[1]: libpod-77f0c6e3d98ac32a67e336ea05fa896c9d6b4504e572b301799906787fe555b2.scope: Deactivated successfully.
Jan 21 11:22:58 np0005590810 conmon[227821]: conmon 77f0c6e3d98ac32a67e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77f0c6e3d98ac32a67e336ea05fa896c9d6b4504e572b301799906787fe555b2.scope/container/memory.events
Jan 21 11:22:58 np0005590810 podman[227805]: 2026-01-21 16:22:58.48816988 +0000 UTC m=+0.151829262 container attach 77f0c6e3d98ac32a67e336ea05fa896c9d6b4504e572b301799906787fe555b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_keldysh, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 11:22:58 np0005590810 podman[227805]: 2026-01-21 16:22:58.492814718 +0000 UTC m=+0.156474100 container died 77f0c6e3d98ac32a67e336ea05fa896c9d6b4504e572b301799906787fe555b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_keldysh, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:22:58 np0005590810 python3.9[227797]: ansible-ansible.builtin.service_facts Invoked
Jan 21 11:22:58 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d66e9ae51de1e4f96d783d1266ecc5fb49f4bbb90b4d00ad0192c3d5bca5f7b5-merged.mount: Deactivated successfully.
Jan 21 11:22:58 np0005590810 podman[227805]: 2026-01-21 16:22:58.53677562 +0000 UTC m=+0.200435002 container remove 77f0c6e3d98ac32a67e336ea05fa896c9d6b4504e572b301799906787fe555b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 11:22:58 np0005590810 systemd[1]: libpod-conmon-77f0c6e3d98ac32a67e336ea05fa896c9d6b4504e572b301799906787fe555b2.scope: Deactivated successfully.
Jan 21 11:22:58 np0005590810 network[227855]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 11:22:58 np0005590810 network[227856]: 'network-scripts' will be removed from distribution in near future.
Jan 21 11:22:58 np0005590810 network[227857]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 11:22:58 np0005590810 podman[227876]: 2026-01-21 16:22:58.712696349 +0000 UTC m=+0.049173170 container create 867a6a025ac777587d999497d25b4f818927cfaa0800ca69d48b8e6d624fd0a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:22:58 np0005590810 podman[227876]: 2026-01-21 16:22:58.69076446 +0000 UTC m=+0.027241301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:22:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:22:58.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a80000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 21 11:22:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:22:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:22:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:22:59.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:59 np0005590810 systemd[1]: Started libpod-conmon-867a6a025ac777587d999497d25b4f818927cfaa0800ca69d48b8e6d624fd0a2.scope.
Jan 21 11:22:59 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:22:59 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cf2ee0819edf28b0606f54cfbe560e061a3131221043458add4d6d86c021d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:22:59 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cf2ee0819edf28b0606f54cfbe560e061a3131221043458add4d6d86c021d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:22:59 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cf2ee0819edf28b0606f54cfbe560e061a3131221043458add4d6d86c021d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:22:59 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cf2ee0819edf28b0606f54cfbe560e061a3131221043458add4d6d86c021d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:22:59 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cf2ee0819edf28b0606f54cfbe560e061a3131221043458add4d6d86c021d8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:22:59 np0005590810 podman[227876]: 2026-01-21 16:22:59.530026466 +0000 UTC m=+0.866503317 container init 867a6a025ac777587d999497d25b4f818927cfaa0800ca69d48b8e6d624fd0a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_jones, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:22:59 np0005590810 podman[227876]: 2026-01-21 16:22:59.540328114 +0000 UTC m=+0.876804935 container start 867a6a025ac777587d999497d25b4f818927cfaa0800ca69d48b8e6d624fd0a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 21 11:22:59 np0005590810 podman[227876]: 2026-01-21 16:22:59.543558727 +0000 UTC m=+0.880035588 container attach 867a6a025ac777587d999497d25b4f818927cfaa0800ca69d48b8e6d624fd0a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 11:22:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 493 B/s wr, 2 op/s
Jan 21 11:22:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:22:59 np0005590810 ecstatic_jones[227926]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:22:59 np0005590810 ecstatic_jones[227926]: --> All data devices are unavailable
Jan 21 11:22:59 np0005590810 systemd[1]: libpod-867a6a025ac777587d999497d25b4f818927cfaa0800ca69d48b8e6d624fd0a2.scope: Deactivated successfully.
Jan 21 11:22:59 np0005590810 podman[227876]: 2026-01-21 16:22:59.917048605 +0000 UTC m=+1.253525436 container died 867a6a025ac777587d999497d25b4f818927cfaa0800ca69d48b8e6d624fd0a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_jones, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 21 11:22:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:22:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:22:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:22:59.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:22:59 np0005590810 systemd[1]: var-lib-containers-storage-overlay-73cf2ee0819edf28b0606f54cfbe560e061a3131221043458add4d6d86c021d8-merged.mount: Deactivated successfully.
Jan 21 11:22:59 np0005590810 podman[227876]: 2026-01-21 16:22:59.972663968 +0000 UTC m=+1.309140809 container remove 867a6a025ac777587d999497d25b4f818927cfaa0800ca69d48b8e6d624fd0a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_jones, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 21 11:22:59 np0005590810 systemd[1]: libpod-conmon-867a6a025ac777587d999497d25b4f818927cfaa0800ca69d48b8e6d624fd0a2.scope: Deactivated successfully.
Jan 21 11:23:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:00 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:00 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70000fb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:00 np0005590810 podman[228092]: 2026-01-21 16:23:00.598285102 +0000 UTC m=+0.051806212 container create 95119391634e004b8a277808611fdf781bdefcbf6910b7d9d3a455e4565b972f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:23:00 np0005590810 systemd[1]: Started libpod-conmon-95119391634e004b8a277808611fdf781bdefcbf6910b7d9d3a455e4565b972f.scope.
Jan 21 11:23:00 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:23:00 np0005590810 podman[228092]: 2026-01-21 16:23:00.574335889 +0000 UTC m=+0.027857019 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:23:00 np0005590810 podman[228092]: 2026-01-21 16:23:00.685872025 +0000 UTC m=+0.139393135 container init 95119391634e004b8a277808611fdf781bdefcbf6910b7d9d3a455e4565b972f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 21 11:23:00 np0005590810 podman[228092]: 2026-01-21 16:23:00.694114438 +0000 UTC m=+0.147635548 container start 95119391634e004b8a277808611fdf781bdefcbf6910b7d9d3a455e4565b972f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:23:00 np0005590810 podman[228092]: 2026-01-21 16:23:00.698301581 +0000 UTC m=+0.151822731 container attach 95119391634e004b8a277808611fdf781bdefcbf6910b7d9d3a455e4565b972f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 21 11:23:00 np0005590810 distracted_torvalds[228113]: 167 167
Jan 21 11:23:00 np0005590810 systemd[1]: libpod-95119391634e004b8a277808611fdf781bdefcbf6910b7d9d3a455e4565b972f.scope: Deactivated successfully.
Jan 21 11:23:00 np0005590810 podman[228092]: 2026-01-21 16:23:00.701381909 +0000 UTC m=+0.154903019 container died 95119391634e004b8a277808611fdf781bdefcbf6910b7d9d3a455e4565b972f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:23:00 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6c9772c94160e1957d97d9c9526f5213e0c5dd1cb7d03aff534e536cbc233c8c-merged.mount: Deactivated successfully.
Jan 21 11:23:00 np0005590810 podman[228092]: 2026-01-21 16:23:00.741201589 +0000 UTC m=+0.194722699 container remove 95119391634e004b8a277808611fdf781bdefcbf6910b7d9d3a455e4565b972f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Jan 21 11:23:00 np0005590810 systemd[1]: libpod-conmon-95119391634e004b8a277808611fdf781bdefcbf6910b7d9d3a455e4565b972f.scope: Deactivated successfully.
Jan 21 11:23:00 np0005590810 podman[228146]: 2026-01-21 16:23:00.919138112 +0000 UTC m=+0.049732117 container create 31afed14979f256e6df09e5bb507ea66a8a7bdf386e5853b4e530e951d99d7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 21 11:23:00 np0005590810 systemd[1]: Started libpod-conmon-31afed14979f256e6df09e5bb507ea66a8a7bdf386e5853b4e530e951d99d7d2.scope.
Jan 21 11:23:00 np0005590810 podman[228146]: 2026-01-21 16:23:00.896992215 +0000 UTC m=+0.027586240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:23:00 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:23:01 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/523ca4f082eef84c34998d236343a0eeaa4a4405fae7dcab26214ddd5c240079/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:23:01 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/523ca4f082eef84c34998d236343a0eeaa4a4405fae7dcab26214ddd5c240079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:23:01 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/523ca4f082eef84c34998d236343a0eeaa4a4405fae7dcab26214ddd5c240079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:23:01 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/523ca4f082eef84c34998d236343a0eeaa4a4405fae7dcab26214ddd5c240079/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:23:01 np0005590810 podman[228146]: 2026-01-21 16:23:01.020175542 +0000 UTC m=+0.150769567 container init 31afed14979f256e6df09e5bb507ea66a8a7bdf386e5853b4e530e951d99d7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_volhard, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:23:01 np0005590810 podman[228146]: 2026-01-21 16:23:01.028282052 +0000 UTC m=+0.158876057 container start 31afed14979f256e6df09e5bb507ea66a8a7bdf386e5853b4e530e951d99d7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:23:01 np0005590810 podman[228146]: 2026-01-21 16:23:01.032256278 +0000 UTC m=+0.162850303 container attach 31afed14979f256e6df09e5bb507ea66a8a7bdf386e5853b4e530e951d99d7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_volhard, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 11:23:01 np0005590810 musing_volhard[228168]: {
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:    "0": [
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:        {
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            "devices": [
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "/dev/loop3"
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            ],
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            "lv_name": "ceph_lv0",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            "lv_size": "21470642176",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            "name": "ceph_lv0",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            "tags": {
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.cluster_name": "ceph",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.crush_device_class": "",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.encrypted": "0",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.osd_id": "0",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.type": "block",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.vdo": "0",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:                "ceph.with_tpm": "0"
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            },
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            "type": "block",
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:            "vg_name": "ceph_vg0"
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:        }
Jan 21 11:23:01 np0005590810 musing_volhard[228168]:    ]
Jan 21 11:23:01 np0005590810 musing_volhard[228168]: }
Jan 21 11:23:01 np0005590810 podman[228146]: 2026-01-21 16:23:01.362326091 +0000 UTC m=+0.492920126 container died 31afed14979f256e6df09e5bb507ea66a8a7bdf386e5853b4e530e951d99d7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Jan 21 11:23:01 np0005590810 systemd[1]: libpod-31afed14979f256e6df09e5bb507ea66a8a7bdf386e5853b4e530e951d99d7d2.scope: Deactivated successfully.
Jan 21 11:23:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:01 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c001040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:01 np0005590810 systemd[1]: var-lib-containers-storage-overlay-523ca4f082eef84c34998d236343a0eeaa4a4405fae7dcab26214ddd5c240079-merged.mount: Deactivated successfully.
Jan 21 11:23:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:23:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:01.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:23:01 np0005590810 podman[228146]: 2026-01-21 16:23:01.412307084 +0000 UTC m=+0.542901079 container remove 31afed14979f256e6df09e5bb507ea66a8a7bdf386e5853b4e530e951d99d7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_volhard, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:23:01 np0005590810 systemd[1]: libpod-conmon-31afed14979f256e6df09e5bb507ea66a8a7bdf386e5853b4e530e951d99d7d2.scope: Deactivated successfully.
Jan 21 11:23:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 616 B/s wr, 2 op/s
Jan 21 11:23:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:01.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:02 np0005590810 podman[228325]: 2026-01-21 16:23:02.060904302 +0000 UTC m=+0.048497947 container create 960c0d20b11f4cd6351c2f7a5b77592301541856a9aeefa64ede8640ba4f9491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:23:02 np0005590810 systemd[1]: Started libpod-conmon-960c0d20b11f4cd6351c2f7a5b77592301541856a9aeefa64ede8640ba4f9491.scope.
Jan 21 11:23:02 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:23:02 np0005590810 podman[228325]: 2026-01-21 16:23:02.043683653 +0000 UTC m=+0.031277318 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:23:02 np0005590810 podman[228325]: 2026-01-21 16:23:02.14267736 +0000 UTC m=+0.130271025 container init 960c0d20b11f4cd6351c2f7a5b77592301541856a9aeefa64ede8640ba4f9491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 21 11:23:02 np0005590810 podman[228325]: 2026-01-21 16:23:02.150937283 +0000 UTC m=+0.138530928 container start 960c0d20b11f4cd6351c2f7a5b77592301541856a9aeefa64ede8640ba4f9491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:23:02 np0005590810 podman[228325]: 2026-01-21 16:23:02.154722773 +0000 UTC m=+0.142316438 container attach 960c0d20b11f4cd6351c2f7a5b77592301541856a9aeefa64ede8640ba4f9491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 21 11:23:02 np0005590810 festive_liskov[228345]: 167 167
Jan 21 11:23:02 np0005590810 systemd[1]: libpod-960c0d20b11f4cd6351c2f7a5b77592301541856a9aeefa64ede8640ba4f9491.scope: Deactivated successfully.
Jan 21 11:23:02 np0005590810 podman[228325]: 2026-01-21 16:23:02.157368847 +0000 UTC m=+0.144962492 container died 960c0d20b11f4cd6351c2f7a5b77592301541856a9aeefa64ede8640ba4f9491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 21 11:23:02 np0005590810 systemd[1]: var-lib-containers-storage-overlay-797145dff0ddd4a42ebd64e49ea89f1d8cc4d60c785e3c81b8145edc12c31815-merged.mount: Deactivated successfully.
Jan 21 11:23:02 np0005590810 podman[228325]: 2026-01-21 16:23:02.194434659 +0000 UTC m=+0.182028304 container remove 960c0d20b11f4cd6351c2f7a5b77592301541856a9aeefa64ede8640ba4f9491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:23:02 np0005590810 systemd[1]: libpod-conmon-960c0d20b11f4cd6351c2f7a5b77592301541856a9aeefa64ede8640ba4f9491.scope: Deactivated successfully.
Jan 21 11:23:02 np0005590810 podman[228392]: 2026-01-21 16:23:02.370775611 +0000 UTC m=+0.050143399 container create 08e0d07ced0d1e52d6ffe6809f7b8e019f77cd9f04f9b77b76416aad7d201fad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_swirles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 21 11:23:02 np0005590810 systemd[1]: Started libpod-conmon-08e0d07ced0d1e52d6ffe6809f7b8e019f77cd9f04f9b77b76416aad7d201fad.scope.
Jan 21 11:23:02 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:23:02 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5c5d5270f003a3192821519f5304fff93fbe6e225c0810a66b6cbf0477c59a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:23:02 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5c5d5270f003a3192821519f5304fff93fbe6e225c0810a66b6cbf0477c59a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:23:02 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5c5d5270f003a3192821519f5304fff93fbe6e225c0810a66b6cbf0477c59a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:23:02 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5c5d5270f003a3192821519f5304fff93fbe6e225c0810a66b6cbf0477c59a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:23:02 np0005590810 podman[228392]: 2026-01-21 16:23:02.349607527 +0000 UTC m=+0.028975285 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:23:02 np0005590810 podman[228392]: 2026-01-21 16:23:02.448005683 +0000 UTC m=+0.127373441 container init 08e0d07ced0d1e52d6ffe6809f7b8e019f77cd9f04f9b77b76416aad7d201fad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_swirles, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:23:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162302 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:23:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:02 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c001040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:02 np0005590810 podman[228392]: 2026-01-21 16:23:02.456109142 +0000 UTC m=+0.135476890 container start 08e0d07ced0d1e52d6ffe6809f7b8e019f77cd9f04f9b77b76416aad7d201fad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_swirles, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Jan 21 11:23:02 np0005590810 podman[228392]: 2026-01-21 16:23:02.459546691 +0000 UTC m=+0.138914469 container attach 08e0d07ced0d1e52d6ffe6809f7b8e019f77cd9f04f9b77b76416aad7d201fad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:23:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:02 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:03 np0005590810 lvm[228485]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:23:03 np0005590810 lvm[228485]: VG ceph_vg0 finished
Jan 21 11:23:03 np0005590810 determined_swirles[228409]: {}
Jan 21 11:23:03 np0005590810 systemd[1]: libpod-08e0d07ced0d1e52d6ffe6809f7b8e019f77cd9f04f9b77b76416aad7d201fad.scope: Deactivated successfully.
Jan 21 11:23:03 np0005590810 systemd[1]: libpod-08e0d07ced0d1e52d6ffe6809f7b8e019f77cd9f04f9b77b76416aad7d201fad.scope: Consumed 1.336s CPU time.
Jan 21 11:23:03 np0005590810 podman[228392]: 2026-01-21 16:23:03.280786103 +0000 UTC m=+0.960153901 container died 08e0d07ced0d1e52d6ffe6809f7b8e019f77cd9f04f9b77b76416aad7d201fad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_swirles, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:23:03 np0005590810 systemd[1]: var-lib-containers-storage-overlay-fb5c5d5270f003a3192821519f5304fff93fbe6e225c0810a66b6cbf0477c59a-merged.mount: Deactivated successfully.
Jan 21 11:23:03 np0005590810 podman[228392]: 2026-01-21 16:23:03.337379798 +0000 UTC m=+1.016747546 container remove 08e0d07ced0d1e52d6ffe6809f7b8e019f77cd9f04f9b77b76416aad7d201fad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_swirles, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:23:03 np0005590810 systemd[1]: libpod-conmon-08e0d07ced0d1e52d6ffe6809f7b8e019f77cd9f04f9b77b76416aad7d201fad.scope: Deactivated successfully.
Jan 21 11:23:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:23:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:03 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:03.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:23:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:23:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:23:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162303 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:23:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 616 B/s wr, 2 op/s
Jan 21 11:23:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:03.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:04 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:23:04 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:23:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:04 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c001040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:04 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:23:05 np0005590810 python3.9[228656]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:23:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:05 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:05.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 369 B/s rd, 123 B/s wr, 0 op/s
Jan 21 11:23:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:23:05] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:23:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:23:05] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:23:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:05.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:06 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:06 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c002550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:07.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:23:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:07 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a64001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:07.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 102 B/s wr, 0 op/s
Jan 21 11:23:07 np0005590810 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 11:23:07 np0005590810 systemd[1]: Starting man-db-cache-update.service...
Jan 21 11:23:07 np0005590810 systemd[1]: Reloading.
Jan 21 11:23:07 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:23:07 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:23:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:07.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:08 np0005590810 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 11:23:08 np0005590810 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 11:23:08 np0005590810 systemd[1]: Finished man-db-cache-update.service.
Jan 21 11:23:08 np0005590810 systemd[1]: run-raaaafd5959a346daa299d81c74b74e08.service: Deactivated successfully.
Jan 21 11:23:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:08 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:08 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:08.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:23:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:08.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:23:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:23:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:23:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:23:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:23:09 np0005590810 python3.9[228977]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 21 11:23:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:09 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c002550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:23:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:09.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:23:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:23:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:23:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:23:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:23:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:23:09 np0005590810 podman[229028]: 2026-01-21 16:23:09.725426624 +0000 UTC m=+0.101824768 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 21 11:23:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:23:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:09.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:10 np0005590810 python3.9[229156]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 21 11:23:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:10 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:10 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:11 np0005590810 python3.9[229313]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:23:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:11 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a64001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:11.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 21 11:23:11 np0005590810 python3.9[229437]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012590.5582337-504-203667050669694/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:23:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:11.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:23:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:12 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c002550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:12 np0005590810 python3.9[229589]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:12 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:13 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:13.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:23:13 np0005590810 python3.9[229743]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:23:13 np0005590810 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 21 11:23:13 np0005590810 systemd[1]: Stopped Load Kernel Modules.
Jan 21 11:23:13 np0005590810 systemd[1]: Stopping Load Kernel Modules...
Jan 21 11:23:13 np0005590810 systemd[1]: Starting Load Kernel Modules...
Jan 21 11:23:13 np0005590810 systemd[1]: Finished Load Kernel Modules.
Jan 21 11:23:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:13.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:14 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:23:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:14 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a64001fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:14 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c0039e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:14 np0005590810 python3.9[229899]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.872164) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012594872206, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1122, "num_deletes": 255, "total_data_size": 1964264, "memory_usage": 1997488, "flush_reason": "Manual Compaction"}
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012594890459, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1913519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17602, "largest_seqno": 18722, "table_properties": {"data_size": 1908255, "index_size": 2724, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10832, "raw_average_key_size": 18, "raw_value_size": 1897623, "raw_average_value_size": 3254, "num_data_blocks": 122, "num_entries": 583, "num_filter_entries": 583, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769012494, "oldest_key_time": 1769012494, "file_creation_time": 1769012594, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 18348 microseconds, and 6734 cpu microseconds.
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.890509) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1913519 bytes OK
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.890537) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.892367) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.892385) EVENT_LOG_v1 {"time_micros": 1769012594892380, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.892409) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1959239, prev total WAL file size 1959239, number of live WAL files 2.
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.893364) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1868KB)], [38(9878KB)]
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012594893466, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 12028839, "oldest_snapshot_seqno": -1}
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4824 keys, 11589172 bytes, temperature: kUnknown
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012594977170, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 11589172, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11555493, "index_size": 20513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 122946, "raw_average_key_size": 25, "raw_value_size": 11466546, "raw_average_value_size": 2376, "num_data_blocks": 842, "num_entries": 4824, "num_filter_entries": 4824, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769012594, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.977695) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 11589172 bytes
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.979474) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.4 rd, 138.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.6 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(12.3) write-amplify(6.1) OK, records in: 5348, records dropped: 524 output_compression: NoCompression
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.979501) EVENT_LOG_v1 {"time_micros": 1769012594979490, "job": 18, "event": "compaction_finished", "compaction_time_micros": 83885, "compaction_time_cpu_micros": 34773, "output_level": 6, "num_output_files": 1, "total_output_size": 11589172, "num_input_records": 5348, "num_output_records": 4824, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012594980024, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012594982295, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.893124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.982357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.982366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.982370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.982374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:23:14 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:23:14.982379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:23:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:15 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:23:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:15.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:23:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:23:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:15.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:23:16] "GET /metrics HTTP/1.1" 200 48354 "" "Prometheus/2.51.0"
Jan 21 11:23:16 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:23:16] "GET /metrics HTTP/1.1" 200 48354 "" "Prometheus/2.51.0"
Jan 21 11:23:16 np0005590810 python3.9[230054]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:23:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:16 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:16 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:17 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:23:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:17 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:23:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:17.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:23:17 np0005590810 python3.9[230207]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:23:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:17 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:23:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:17.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:23:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:23:17 np0005590810 python3.9[230331]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012596.682436-657-116072442314809/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:17.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:18 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:18 np0005590810 python3.9[230483]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:23:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:18 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:18.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:23:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:18.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:23:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:18.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:23:19 np0005590810 python3.9[230637]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:19 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c0039e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.003000097s ======
Jan 21 11:23:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:19.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000097s
Jan 21 11:23:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:23:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:23:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:19.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:20 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:23:20 np0005590810 python3.9[230815]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:20 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:20 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:20 np0005590810 python3.9[230967]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:21 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:21.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:23:21 np0005590810 python3.9[231121]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:21.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:23:22.010 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:23:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:23:22.011 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:23:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:23:22.011 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:23:22 np0005590810 python3.9[231273]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:22 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c0039e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:22 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:22 np0005590810 python3.9[231425]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:23 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:23.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:23 np0005590810 python3.9[231579]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:23:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:23.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:24 np0005590810 python3.9[231731]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:23:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:23:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:23:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:24 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:24 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c0039e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:23:25 np0005590810 python3.9[231886]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:23:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:25 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:25.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:23:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162325 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:23:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:23:25] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:23:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:23:25] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:23:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:23:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:26.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:23:26 np0005590810 python3.9[232040]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:23:26 np0005590810 systemd[1]: Listening on multipathd control socket.
Jan 21 11:23:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:26 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:26 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:26 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:26 np0005590810 python3.9[232196]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:23:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:27.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:23:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:27.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:23:27 np0005590810 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 21 11:23:27 np0005590810 podman[232199]: 2026-01-21 16:23:27.09273811 +0000 UTC m=+0.070746736 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 11:23:27 np0005590810 udevadm[232215]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 21 11:23:27 np0005590810 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 21 11:23:27 np0005590810 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 21 11:23:27 np0005590810 multipathd[232224]: --------start up--------
Jan 21 11:23:27 np0005590810 multipathd[232224]: read /etc/multipath.conf
Jan 21 11:23:27 np0005590810 multipathd[232224]: path checkers start up
Jan 21 11:23:27 np0005590810 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 21 11:23:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:27 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c0039e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:27.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Jan 21 11:23:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:28.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:28 np0005590810 python3.9[232383]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 21 11:23:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:28 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:28 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:28.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:23:29 np0005590810 python3.9[232536]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 21 11:23:29 np0005590810 kernel: Key type psk registered
Jan 21 11:23:29 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:29 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:29.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Jan 21 11:23:29 np0005590810 python3.9[232698]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:23:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:23:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:30.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:30 np0005590810 python3.9[232821]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769012609.3331363-1047-121659552608525/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:30 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c0039e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:30 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:31 np0005590810 python3.9[232976]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:31 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:31 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:31.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Jan 21 11:23:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:23:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:32.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:23:32 np0005590810 python3.9[233129]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:23:32 np0005590810 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 21 11:23:32 np0005590810 systemd[1]: Stopped Load Kernel Modules.
Jan 21 11:23:32 np0005590810 systemd[1]: Stopping Load Kernel Modules...
Jan 21 11:23:32 np0005590810 systemd[1]: Starting Load Kernel Modules...
Jan 21 11:23:32 np0005590810 systemd[1]: Finished Load Kernel Modules.
Jan 21 11:23:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:32 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:32 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58000d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:33 np0005590810 python3.9[233286]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 11:23:33 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:33 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:33.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:23:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:34.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:34 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:34 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:34 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:23:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:35 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a580018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:35.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:23:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:23:35] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:23:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:23:35] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:23:35 np0005590810 systemd[1]: Reloading.
Jan 21 11:23:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:36.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:36 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:23:36 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:23:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0[79851]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 21 11:23:36 np0005590810 systemd[1]: Reloading.
Jan 21 11:23:36 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:23:36 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:23:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:36 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a500016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:36 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:36 np0005590810 systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 21 11:23:36 np0005590810 systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 21 11:23:36 np0005590810 lvm[233403]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:23:36 np0005590810 lvm[233403]: VG ceph_vg0 finished
Jan 21 11:23:36 np0005590810 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 11:23:37 np0005590810 systemd[1]: Starting man-db-cache-update.service...
Jan 21 11:23:37 np0005590810 systemd[1]: Reloading.
Jan 21 11:23:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:37.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:23:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:37.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:23:37 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:23:37 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:23:37 np0005590810 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 11:23:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:37 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:37.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:23:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:38.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:38 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a580018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:38 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:38 np0005590810 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 11:23:38 np0005590810 systemd[1]: Finished man-db-cache-update.service.
Jan 21 11:23:38 np0005590810 systemd[1]: man-db-cache-update.service: Consumed 1.843s CPU time.
Jan 21 11:23:38 np0005590810 systemd[1]: run-rd94eb5902ee94c71ac904a617ccb82a0.service: Deactivated successfully.
Jan 21 11:23:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:38.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:23:39
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.nfs', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.log', '.rgw.root']
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:23:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:23:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:23:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:39 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:23:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:23:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:39.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:23:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:23:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:23:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:23:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:40.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:23:40 np0005590810 podman[234755]: 2026-01-21 16:23:40.431287226 +0000 UTC m=+0.102146858 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:23:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:40 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:40 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a580018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:40 np0005590810 python3.9[234803]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:23:40 np0005590810 systemd[1]: Stopping Open-iSCSI...
Jan 21 11:23:40 np0005590810 iscsid[227542]: iscsid shutting down.
Jan 21 11:23:40 np0005590810 systemd[1]: iscsid.service: Deactivated successfully.
Jan 21 11:23:40 np0005590810 systemd[1]: Stopped Open-iSCSI.
Jan 21 11:23:40 np0005590810 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 21 11:23:40 np0005590810 systemd[1]: Starting Open-iSCSI...
Jan 21 11:23:40 np0005590810 systemd[1]: Started Open-iSCSI.
Jan 21 11:23:41 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:41 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:41.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:23:41 np0005590810 python3.9[234968]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:23:41 np0005590810 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 21 11:23:41 np0005590810 multipathd[232224]: exit (signal)
Jan 21 11:23:41 np0005590810 multipathd[232224]: --------shut down-------
Jan 21 11:23:41 np0005590810 systemd[1]: multipathd.service: Deactivated successfully.
Jan 21 11:23:41 np0005590810 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 21 11:23:41 np0005590810 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 21 11:23:41 np0005590810 multipathd[234974]: --------start up--------
Jan 21 11:23:41 np0005590810 multipathd[234974]: read /etc/multipath.conf
Jan 21 11:23:41 np0005590810 multipathd[234974]: path checkers start up
Jan 21 11:23:41 np0005590810 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 21 11:23:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:42.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:42 np0005590810 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 21 11:23:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:42 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:42 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:42 np0005590810 python3.9[235132]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 11:23:43 np0005590810 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 21 11:23:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:43 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58002d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:43.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:23:43 np0005590810 python3.9[235291]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:23:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:44.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:44 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:44 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:44 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:23:44 np0005590810 python3.9[235443]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 11:23:44 np0005590810 systemd[1]: Reloading.
Jan 21 11:23:45 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:23:45 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:23:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:45 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:45.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 21 11:23:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:23:45] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:23:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:23:45] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:23:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:46.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:46 np0005590810 python3.9[235630]: ansible-ansible.builtin.service_facts Invoked
Jan 21 11:23:46 np0005590810 network[235647]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 11:23:46 np0005590810 network[235648]: 'network-scripts' will be removed from distribution in near future.
Jan 21 11:23:46 np0005590810 network[235649]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 11:23:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:46 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:46 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a500032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:47.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:23:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:47 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:47.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:23:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:48.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:48 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58002d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:48 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:48.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:23:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:49 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a500032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:49.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:23:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:23:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:50.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:50 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:50 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:51 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:51.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:23:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:52.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:52 np0005590810 python3.9[235928]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:23:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:52 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:52 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:53 np0005590810 python3.9[236082]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:23:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:53 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:53.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:23:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:54.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:54 np0005590810 python3.9[236236]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:23:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:23:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:23:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:54 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:54 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:23:54 np0005590810 python3.9[236389]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:23:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:55 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:55.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 21 11:23:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:23:55] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:23:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:23:55] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:23:55 np0005590810 python3.9[236544]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:23:55 np0005590810 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 21 11:23:55 np0005590810 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 21 11:23:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:56.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:56 np0005590810 python3.9[236699]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:23:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:56 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:56 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:57.081Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:23:57 np0005590810 python3.9[236853]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:23:57 np0005590810 podman[236856]: 2026-01-21 16:23:57.412510111 +0000 UTC m=+0.079244266 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 21 11:23:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:57 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:57.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:23:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:23:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:23:58.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:23:58 np0005590810 python3.9[237026]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:23:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:58 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:58 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:23:58.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:23:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:23:59 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a780032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:23:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:23:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:23:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:23:59.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:23:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:23:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:24:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:00.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:00 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:00 np0005590810 python3.9[237206]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:00 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a640030e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:01 np0005590810 python3.9[237360]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:01 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70001900 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:01.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:24:01 np0005590810 python3.9[237513]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:02.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:02 np0005590810 python3.9[237665]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:02 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:02 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:03 np0005590810 python3.9[237819]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:03 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:03.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:24:03 np0005590810 python3.9[237972]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:04.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:24:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:24:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:24:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:24:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:04 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70001900 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:04 np0005590810 python3.9[238192]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:04 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:24:04 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:24:04 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:24:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:24:05 np0005590810 python3.9[238414]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:05 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c002a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:24:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:05.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:24:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:24:05] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:24:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:24:05] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:24:05 np0005590810 podman[238635]: 2026-01-21 16:24:05.646507279 +0000 UTC m=+0.039862932 container create f164d902f832c66c9abfb375490b4b6fb190dc07c680607f0a8223fd7ba474ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:24:05 np0005590810 systemd[1]: Started libpod-conmon-f164d902f832c66c9abfb375490b4b6fb190dc07c680607f0a8223fd7ba474ba.scope.
Jan 21 11:24:05 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:24:05 np0005590810 podman[238635]: 2026-01-21 16:24:05.629652932 +0000 UTC m=+0.023008615 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:24:05 np0005590810 podman[238635]: 2026-01-21 16:24:05.729717172 +0000 UTC m=+0.123072855 container init f164d902f832c66c9abfb375490b4b6fb190dc07c680607f0a8223fd7ba474ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_mendel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:24:05 np0005590810 podman[238635]: 2026-01-21 16:24:05.738963247 +0000 UTC m=+0.132318900 container start f164d902f832c66c9abfb375490b4b6fb190dc07c680607f0a8223fd7ba474ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_mendel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:24:05 np0005590810 podman[238635]: 2026-01-21 16:24:05.743031177 +0000 UTC m=+0.136386830 container attach f164d902f832c66c9abfb375490b4b6fb190dc07c680607f0a8223fd7ba474ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_mendel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:24:05 np0005590810 awesome_mendel[238684]: 167 167
Jan 21 11:24:05 np0005590810 systemd[1]: libpod-f164d902f832c66c9abfb375490b4b6fb190dc07c680607f0a8223fd7ba474ba.scope: Deactivated successfully.
Jan 21 11:24:05 np0005590810 podman[238635]: 2026-01-21 16:24:05.745928149 +0000 UTC m=+0.139283802 container died f164d902f832c66c9abfb375490b4b6fb190dc07c680607f0a8223fd7ba474ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Jan 21 11:24:05 np0005590810 systemd[1]: var-lib-containers-storage-overlay-41653968a266d21b2bd61310682c0cccdfe1baa42abaffd915993f3bdd1e768a-merged.mount: Deactivated successfully.
Jan 21 11:24:05 np0005590810 podman[238635]: 2026-01-21 16:24:05.787832045 +0000 UTC m=+0.181187698 container remove f164d902f832c66c9abfb375490b4b6fb190dc07c680607f0a8223fd7ba474ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_mendel, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:24:05 np0005590810 systemd[1]: libpod-conmon-f164d902f832c66c9abfb375490b4b6fb190dc07c680607f0a8223fd7ba474ba.scope: Deactivated successfully.
Jan 21 11:24:05 np0005590810 python3.9[238686]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:05 np0005590810 podman[238709]: 2026-01-21 16:24:05.953442275 +0000 UTC m=+0.042307190 container create b41a0fa0748118d42bd07fbb6c99f598d436897230a5ee6599ac218a9bc409c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:24:05 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:24:05 np0005590810 systemd[1]: Started libpod-conmon-b41a0fa0748118d42bd07fbb6c99f598d436897230a5ee6599ac218a9bc409c7.scope.
Jan 21 11:24:06 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:24:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a935a6723df7130ef9d9a435eafc7503378d2ba3cde45e4ae6d7f0176ec6420f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a935a6723df7130ef9d9a435eafc7503378d2ba3cde45e4ae6d7f0176ec6420f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a935a6723df7130ef9d9a435eafc7503378d2ba3cde45e4ae6d7f0176ec6420f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:06 np0005590810 podman[238709]: 2026-01-21 16:24:05.933803469 +0000 UTC m=+0.022668404 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:24:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a935a6723df7130ef9d9a435eafc7503378d2ba3cde45e4ae6d7f0176ec6420f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:06 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a935a6723df7130ef9d9a435eafc7503378d2ba3cde45e4ae6d7f0176ec6420f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:06 np0005590810 podman[238709]: 2026-01-21 16:24:06.040303033 +0000 UTC m=+0.129167948 container init b41a0fa0748118d42bd07fbb6c99f598d436897230a5ee6599ac218a9bc409c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 21 11:24:06 np0005590810 podman[238709]: 2026-01-21 16:24:06.04832122 +0000 UTC m=+0.137186135 container start b41a0fa0748118d42bd07fbb6c99f598d436897230a5ee6599ac218a9bc409c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_beaver, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:24:06 np0005590810 podman[238709]: 2026-01-21 16:24:06.052676738 +0000 UTC m=+0.141541653 container attach b41a0fa0748118d42bd07fbb6c99f598d436897230a5ee6599ac218a9bc409c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:24:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:06.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:06 np0005590810 nostalgic_beaver[238749]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:24:06 np0005590810 nostalgic_beaver[238749]: --> All data devices are unavailable
Jan 21 11:24:06 np0005590810 systemd[1]: libpod-b41a0fa0748118d42bd07fbb6c99f598d436897230a5ee6599ac218a9bc409c7.scope: Deactivated successfully.
Jan 21 11:24:06 np0005590810 conmon[238749]: conmon b41a0fa0748118d42bd0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b41a0fa0748118d42bd07fbb6c99f598d436897230a5ee6599ac218a9bc409c7.scope/container/memory.events
Jan 21 11:24:06 np0005590810 podman[238709]: 2026-01-21 16:24:06.428852841 +0000 UTC m=+0.517717756 container died b41a0fa0748118d42bd07fbb6c99f598d436897230a5ee6599ac218a9bc409c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:24:06 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a935a6723df7130ef9d9a435eafc7503378d2ba3cde45e4ae6d7f0176ec6420f-merged.mount: Deactivated successfully.
Jan 21 11:24:06 np0005590810 podman[238709]: 2026-01-21 16:24:06.479602369 +0000 UTC m=+0.568467284 container remove b41a0fa0748118d42bd07fbb6c99f598d436897230a5ee6599ac218a9bc409c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_beaver, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:24:06 np0005590810 systemd[1]: libpod-conmon-b41a0fa0748118d42bd07fbb6c99f598d436897230a5ee6599ac218a9bc409c7.scope: Deactivated successfully.
Jan 21 11:24:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:06 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:06 np0005590810 python3.9[238887]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:06 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:07 np0005590810 podman[239147]: 2026-01-21 16:24:07.080378812 +0000 UTC m=+0.042866328 container create 40e3f100c9685674cb3502070864eb690b3f04f69ac8ff14d1a5353e16ec868a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lovelace, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 11:24:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:07.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:24:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Jan 21 11:24:07 np0005590810 systemd[1]: Started libpod-conmon-40e3f100c9685674cb3502070864eb690b3f04f69ac8ff14d1a5353e16ec868a.scope.
Jan 21 11:24:07 np0005590810 python3.9[239124]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:07 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:24:07 np0005590810 podman[239147]: 2026-01-21 16:24:07.060980774 +0000 UTC m=+0.023468310 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:24:07 np0005590810 podman[239147]: 2026-01-21 16:24:07.172308423 +0000 UTC m=+0.134795959 container init 40e3f100c9685674cb3502070864eb690b3f04f69ac8ff14d1a5353e16ec868a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lovelace, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:24:07 np0005590810 podman[239147]: 2026-01-21 16:24:07.182690014 +0000 UTC m=+0.145177530 container start 40e3f100c9685674cb3502070864eb690b3f04f69ac8ff14d1a5353e16ec868a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 21 11:24:07 np0005590810 xenodochial_lovelace[239163]: 167 167
Jan 21 11:24:07 np0005590810 podman[239147]: 2026-01-21 16:24:07.190192134 +0000 UTC m=+0.152679670 container attach 40e3f100c9685674cb3502070864eb690b3f04f69ac8ff14d1a5353e16ec868a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 21 11:24:07 np0005590810 systemd[1]: libpod-40e3f100c9685674cb3502070864eb690b3f04f69ac8ff14d1a5353e16ec868a.scope: Deactivated successfully.
Jan 21 11:24:07 np0005590810 podman[239147]: 2026-01-21 16:24:07.192099534 +0000 UTC m=+0.154587060 container died 40e3f100c9685674cb3502070864eb690b3f04f69ac8ff14d1a5353e16ec868a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lovelace, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 11:24:07 np0005590810 systemd[1]: var-lib-containers-storage-overlay-eb2628406ca4e18d97af9762079ec9227d13a7aaf0c638229f4b5c2964fb928b-merged.mount: Deactivated successfully.
Jan 21 11:24:07 np0005590810 podman[239147]: 2026-01-21 16:24:07.23368499 +0000 UTC m=+0.196172506 container remove 40e3f100c9685674cb3502070864eb690b3f04f69ac8ff14d1a5353e16ec868a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:24:07 np0005590810 systemd[1]: libpod-conmon-40e3f100c9685674cb3502070864eb690b3f04f69ac8ff14d1a5353e16ec868a.scope: Deactivated successfully.
Jan 21 11:24:07 np0005590810 podman[239264]: 2026-01-21 16:24:07.403430472 +0000 UTC m=+0.045795651 container create 5af7c2139d829a646ca669a0e35b31dc8d82b64e1db5a3f6e906ba1945462756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 21 11:24:07 np0005590810 systemd[1]: Started libpod-conmon-5af7c2139d829a646ca669a0e35b31dc8d82b64e1db5a3f6e906ba1945462756.scope.
Jan 21 11:24:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:07 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:07 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:24:07 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7719d9c996d124454b566e9529fcc0c85fa0a2d95d095105c717de4bd7bb450/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:07.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:07 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7719d9c996d124454b566e9529fcc0c85fa0a2d95d095105c717de4bd7bb450/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:07 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7719d9c996d124454b566e9529fcc0c85fa0a2d95d095105c717de4bd7bb450/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:07 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7719d9c996d124454b566e9529fcc0c85fa0a2d95d095105c717de4bd7bb450/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:07 np0005590810 podman[239264]: 2026-01-21 16:24:07.387122831 +0000 UTC m=+0.029488020 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:24:07 np0005590810 podman[239264]: 2026-01-21 16:24:07.486548862 +0000 UTC m=+0.128914051 container init 5af7c2139d829a646ca669a0e35b31dc8d82b64e1db5a3f6e906ba1945462756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:24:07 np0005590810 podman[239264]: 2026-01-21 16:24:07.494598728 +0000 UTC m=+0.136963907 container start 5af7c2139d829a646ca669a0e35b31dc8d82b64e1db5a3f6e906ba1945462756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 21 11:24:07 np0005590810 podman[239264]: 2026-01-21 16:24:07.499151193 +0000 UTC m=+0.141516372 container attach 5af7c2139d829a646ca669a0e35b31dc8d82b64e1db5a3f6e906ba1945462756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_pike, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:24:07 np0005590810 python3.9[239361]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:07 np0005590810 goofy_pike[239317]: {
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:    "0": [
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:        {
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            "devices": [
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "/dev/loop3"
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            ],
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            "lv_name": "ceph_lv0",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            "lv_size": "21470642176",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            "name": "ceph_lv0",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            "tags": {
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.cluster_name": "ceph",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.crush_device_class": "",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.encrypted": "0",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.osd_id": "0",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.type": "block",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.vdo": "0",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:                "ceph.with_tpm": "0"
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            },
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            "type": "block",
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:            "vg_name": "ceph_vg0"
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:        }
Jan 21 11:24:07 np0005590810 goofy_pike[239317]:    ]
Jan 21 11:24:07 np0005590810 goofy_pike[239317]: }
Jan 21 11:24:07 np0005590810 systemd[1]: libpod-5af7c2139d829a646ca669a0e35b31dc8d82b64e1db5a3f6e906ba1945462756.scope: Deactivated successfully.
Jan 21 11:24:07 np0005590810 podman[239264]: 2026-01-21 16:24:07.830355262 +0000 UTC m=+0.472720441 container died 5af7c2139d829a646ca669a0e35b31dc8d82b64e1db5a3f6e906ba1945462756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_pike, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:24:07 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d7719d9c996d124454b566e9529fcc0c85fa0a2d95d095105c717de4bd7bb450-merged.mount: Deactivated successfully.
Jan 21 11:24:07 np0005590810 podman[239264]: 2026-01-21 16:24:07.877742033 +0000 UTC m=+0.520107212 container remove 5af7c2139d829a646ca669a0e35b31dc8d82b64e1db5a3f6e906ba1945462756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_pike, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:24:07 np0005590810 systemd[1]: libpod-conmon-5af7c2139d829a646ca669a0e35b31dc8d82b64e1db5a3f6e906ba1945462756.scope: Deactivated successfully.
Jan 21 11:24:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:08.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:08 np0005590810 python3.9[239578]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:08 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c002a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:08 np0005590810 podman[239628]: 2026-01-21 16:24:08.524854414 +0000 UTC m=+0.043457076 container create cda1e0d51c8647ccc1b41d2b97f8fdede1c1c1bc8081e2c4a52824f1516eb6b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 11:24:08 np0005590810 systemd[1]: Started libpod-conmon-cda1e0d51c8647ccc1b41d2b97f8fdede1c1c1bc8081e2c4a52824f1516eb6b7.scope.
Jan 21 11:24:08 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:24:08 np0005590810 podman[239628]: 2026-01-21 16:24:08.505506507 +0000 UTC m=+0.024109209 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:24:08 np0005590810 podman[239628]: 2026-01-21 16:24:08.604601396 +0000 UTC m=+0.123204088 container init cda1e0d51c8647ccc1b41d2b97f8fdede1c1c1bc8081e2c4a52824f1516eb6b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:24:08 np0005590810 podman[239628]: 2026-01-21 16:24:08.611683432 +0000 UTC m=+0.130286104 container start cda1e0d51c8647ccc1b41d2b97f8fdede1c1c1bc8081e2c4a52824f1516eb6b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:24:08 np0005590810 keen_bartik[239682]: 167 167
Jan 21 11:24:08 np0005590810 systemd[1]: libpod-cda1e0d51c8647ccc1b41d2b97f8fdede1c1c1bc8081e2c4a52824f1516eb6b7.scope: Deactivated successfully.
Jan 21 11:24:08 np0005590810 podman[239628]: 2026-01-21 16:24:08.619376677 +0000 UTC m=+0.137979379 container attach cda1e0d51c8647ccc1b41d2b97f8fdede1c1c1bc8081e2c4a52824f1516eb6b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bartik, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 11:24:08 np0005590810 podman[239628]: 2026-01-21 16:24:08.620178363 +0000 UTC m=+0.138781045 container died cda1e0d51c8647ccc1b41d2b97f8fdede1c1c1bc8081e2c4a52824f1516eb6b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bartik, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:24:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:08 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:08 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c50ccee8084615234e2f8e8dbe1a8d9960273662ccf706ed35a7b6afc9e02bfe-merged.mount: Deactivated successfully.
Jan 21 11:24:08 np0005590810 podman[239628]: 2026-01-21 16:24:08.656216891 +0000 UTC m=+0.174819563 container remove cda1e0d51c8647ccc1b41d2b97f8fdede1c1c1bc8081e2c4a52824f1516eb6b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 11:24:08 np0005590810 systemd[1]: libpod-conmon-cda1e0d51c8647ccc1b41d2b97f8fdede1c1c1bc8081e2c4a52824f1516eb6b7.scope: Deactivated successfully.
Jan 21 11:24:08 np0005590810 podman[239781]: 2026-01-21 16:24:08.858519081 +0000 UTC m=+0.059156417 container create a0b748fd9d8e25134f6d31d1284d1540befae9609de33a49482f5e5f2ee4882c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_euclid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:24:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:08.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:24:08 np0005590810 systemd[1]: Started libpod-conmon-a0b748fd9d8e25134f6d31d1284d1540befae9609de33a49482f5e5f2ee4882c.scope.
Jan 21 11:24:08 np0005590810 podman[239781]: 2026-01-21 16:24:08.836078466 +0000 UTC m=+0.036715852 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:24:08 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:24:08 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e21be93e35af1bc6b853df1fd5d443660b674f7943e6d5c315367fd5580814f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:08 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e21be93e35af1bc6b853df1fd5d443660b674f7943e6d5c315367fd5580814f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:08 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e21be93e35af1bc6b853df1fd5d443660b674f7943e6d5c315367fd5580814f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:08 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e21be93e35af1bc6b853df1fd5d443660b674f7943e6d5c315367fd5580814f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:08 np0005590810 podman[239781]: 2026-01-21 16:24:08.963617981 +0000 UTC m=+0.164255347 container init a0b748fd9d8e25134f6d31d1284d1540befae9609de33a49482f5e5f2ee4882c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:24:08 np0005590810 podman[239781]: 2026-01-21 16:24:08.971797262 +0000 UTC m=+0.172434598 container start a0b748fd9d8e25134f6d31d1284d1540befae9609de33a49482f5e5f2ee4882c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_euclid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:24:08 np0005590810 podman[239781]: 2026-01-21 16:24:08.975402437 +0000 UTC m=+0.176039773 container attach a0b748fd9d8e25134f6d31d1284d1540befae9609de33a49482f5e5f2ee4882c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_euclid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:24:09 np0005590810 python3.9[239823]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Jan 21 11:24:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:24:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:24:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:24:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:24:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:09 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:09.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:09 np0005590810 lvm[240053]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:24:09 np0005590810 lvm[240053]: VG ceph_vg0 finished
Jan 21 11:24:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:24:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:24:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:24:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:24:09 np0005590810 gifted_euclid[239826]: {}
Jan 21 11:24:09 np0005590810 python3.9[240045]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:09 np0005590810 systemd[1]: libpod-a0b748fd9d8e25134f6d31d1284d1540befae9609de33a49482f5e5f2ee4882c.scope: Deactivated successfully.
Jan 21 11:24:09 np0005590810 podman[239781]: 2026-01-21 16:24:09.730664205 +0000 UTC m=+0.931301541 container died a0b748fd9d8e25134f6d31d1284d1540befae9609de33a49482f5e5f2ee4882c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Jan 21 11:24:09 np0005590810 systemd[1]: libpod-a0b748fd9d8e25134f6d31d1284d1540befae9609de33a49482f5e5f2ee4882c.scope: Consumed 1.248s CPU time.
Jan 21 11:24:09 np0005590810 systemd[1]: var-lib-containers-storage-overlay-3e21be93e35af1bc6b853df1fd5d443660b674f7943e6d5c315367fd5580814f-merged.mount: Deactivated successfully.
Jan 21 11:24:09 np0005590810 podman[239781]: 2026-01-21 16:24:09.781505297 +0000 UTC m=+0.982142633 container remove a0b748fd9d8e25134f6d31d1284d1540befae9609de33a49482f5e5f2ee4882c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:24:09 np0005590810 systemd[1]: libpod-conmon-a0b748fd9d8e25134f6d31d1284d1540befae9609de33a49482f5e5f2ee4882c.scope: Deactivated successfully.
Jan 21 11:24:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:24:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:24:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:24:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:24:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:24:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:24:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:10.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:24:10 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:24:10 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:24:10 np0005590810 python3.9[240244]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:10 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:10 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c002a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:10 np0005590810 podman[240269]: 2026-01-21 16:24:10.749852428 +0000 UTC m=+0.124768128 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 21 11:24:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 444 B/s rd, 0 op/s
Jan 21 11:24:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:11 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:11 np0005590810 python3.9[240426]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:24:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:11.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:12.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:12 np0005590810 python3.9[240578]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 11:24:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:12 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:12 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Jan 21 11:24:13 np0005590810 python3.9[240731]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 11:24:13 np0005590810 systemd[1]: Reloading.
Jan 21 11:24:13 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:24:13 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:24:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:13 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c002a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:24:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:13.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:24:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:14.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:14 np0005590810 python3.9[240919]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:24:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:14 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:14 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:24:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Jan 21 11:24:15 np0005590810 python3.9[241073]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:24:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:15 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:15.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:24:15] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Jan 21 11:24:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:24:15] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Jan 21 11:24:15 np0005590810 python3.9[241227]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:24:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:24:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:16.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:24:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:16 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c002a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:16 np0005590810 python3.9[241380]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:24:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:16 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:17.084Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:24:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:17.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:24:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 21 11:24:17 np0005590810 python3.9[241534]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:24:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:17 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70002240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:17.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:17 np0005590810 python3.9[241688]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:24:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:24:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:18.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:24:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:18 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:18 np0005590810 python3.9[241841]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:24:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:18 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a6c0037f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:18.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:24:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:24:19 np0005590810 python3.9[241995]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 11:24:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:19 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:19.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:24:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:20.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:20 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:20 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:24:21 np0005590810 python3.9[242175]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:21 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:21.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:21 np0005590810 python3.9[242328]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:24:22.012 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:24:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:24:22.013 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:24:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:24:22.013 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:24:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:22.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:22 np0005590810 python3.9[242480]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:22 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a70003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:22 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:24:23 np0005590810 python3.9[242634]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:23 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:23 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:24:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:23.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:24:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:24:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:24.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:24:24 np0005590810 python3.9[242786]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:24:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:24:24 np0005590810 kernel: ganesha.nfsd[232846]: segfault at 50 ip 00007f0b0a48732e sp 00007f0a7f7fd210 error 4 in libntirpc.so.5.8[7f0b0a46c000+2c000] likely on CPU 3 (core 0, socket 3)
Jan 21 11:24:24 np0005590810 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 21 11:24:24 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[225865]: 21/01/2026 16:24:24 : epoch 6970fd57 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0a50004000 fd 48 proxy ignored for local
Jan 21 11:24:24 np0005590810 systemd[1]: Started Process Core Dump (PID 242910/UID 0).
Jan 21 11:24:24 np0005590810 python3.9[242940]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:24:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:24:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:25.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:25 np0005590810 python3.9[243094]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:24:25] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Jan 21 11:24:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:24:25] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Jan 21 11:24:25 np0005590810 systemd-coredump[242918]: Process 225869 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 56:#012#0  0x00007f0b0a48732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Jan 21 11:24:25 np0005590810 systemd[1]: systemd-coredump@9-242910-0.service: Deactivated successfully.
Jan 21 11:24:25 np0005590810 systemd[1]: systemd-coredump@9-242910-0.service: Consumed 1.252s CPU time.
Jan 21 11:24:25 np0005590810 podman[243198]: 2026-01-21 16:24:25.939388716 +0000 UTC m=+0.032979773 container died 0c03a276f648a0db622e602d6334f91f296fc3f32d5a575fe345dfd19c21221b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 21 11:24:25 np0005590810 systemd[1]: var-lib-containers-storage-overlay-4179a48b47073a528dec75668db0cc3c4cf18cb0b724fc4ab3f51cd3a0a471ff-merged.mount: Deactivated successfully.
Jan 21 11:24:25 np0005590810 podman[243198]: 2026-01-21 16:24:25.982773789 +0000 UTC m=+0.076364826 container remove 0c03a276f648a0db622e602d6334f91f296fc3f32d5a575fe345dfd19c21221b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:24:25 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Main process exited, code=exited, status=139/n/a
Jan 21 11:24:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:26.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:26 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Failed with result 'exit-code'.
Jan 21 11:24:26 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.724s CPU time.
Jan 21 11:24:26 np0005590810 python3.9[243270]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:26 np0005590810 python3.9[243444]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:27.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:24:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:27.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:24:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 21 11:24:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:27.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:27 np0005590810 python3.9[243598]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:27 np0005590810 podman[243599]: 2026-01-21 16:24:27.673188581 +0000 UTC m=+0.086732576 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 21 11:24:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:28.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:28.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:24:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:24:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:29.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:24:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:24:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:30.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:24:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162430 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:24:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 21 11:24:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:31.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:32.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:33 np0005590810 python3.9[243775]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 21 11:24:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:24:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:24:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:33.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:24:34 np0005590810 python3.9[243929]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 11:24:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:24:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:34.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:24:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:24:35 np0005590810 python3.9[244088]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 11:24:35 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:24:35 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:24:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:24:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:35.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:24:35] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Jan 21 11:24:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:24:35] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Jan 21 11:24:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:36.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:36 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Scheduled restart job, restart counter is at 10.
Jan 21 11:24:36 np0005590810 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:24:36 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.724s CPU time.
Jan 21 11:24:36 np0005590810 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:24:36 np0005590810 systemd-logind[795]: New session 55 of user zuul.
Jan 21 11:24:36 np0005590810 systemd[1]: Started Session 55 of User zuul.
Jan 21 11:24:36 np0005590810 systemd[1]: session-55.scope: Deactivated successfully.
Jan 21 11:24:36 np0005590810 systemd-logind[795]: Session 55 logged out. Waiting for processes to exit.
Jan 21 11:24:36 np0005590810 systemd-logind[795]: Removed session 55.
Jan 21 11:24:36 np0005590810 podman[244204]: 2026-01-21 16:24:36.577560479 +0000 UTC m=+0.045432930 container create a1089552432e211b0702f1f3ddfbe1ea899d7b5503c7d73a6a84a6c76e76b0c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:24:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2936eb6c242a2c94f4e272c14500fe13a462f02fb970f0a66d50f56623c53b6/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2936eb6c242a2c94f4e272c14500fe13a462f02fb970f0a66d50f56623c53b6/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2936eb6c242a2c94f4e272c14500fe13a462f02fb970f0a66d50f56623c53b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2936eb6c242a2c94f4e272c14500fe13a462f02fb970f0a66d50f56623c53b6/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbatwb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:24:36 np0005590810 podman[244204]: 2026-01-21 16:24:36.649091711 +0000 UTC m=+0.116964162 container init a1089552432e211b0702f1f3ddfbe1ea899d7b5503c7d73a6a84a6c76e76b0c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:24:36 np0005590810 podman[244204]: 2026-01-21 16:24:36.558659157 +0000 UTC m=+0.026531608 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:24:36 np0005590810 podman[244204]: 2026-01-21 16:24:36.65464324 +0000 UTC m=+0.122515671 container start a1089552432e211b0702f1f3ddfbe1ea899d7b5503c7d73a6a84a6c76e76b0c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:24:36 np0005590810 bash[244204]: a1089552432e211b0702f1f3ddfbe1ea899d7b5503c7d73a6a84a6c76e76b0c5
Jan 21 11:24:36 np0005590810 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:24:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:36 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 21 11:24:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:36 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 21 11:24:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:36 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 21 11:24:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:36 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 21 11:24:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:36 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 21 11:24:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:36 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 21 11:24:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:36 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 21 11:24:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:36 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:24:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:37.087Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:24:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:37.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:24:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:24:37 np0005590810 python3.9[244388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:24:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:37.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:37 np0005590810 python3.9[244510]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012676.772332-2654-207109776958300/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162438 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:24:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:38.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:38 np0005590810 python3.9[244660]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:24:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:38.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:24:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:38.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:24:39 np0005590810 python3.9[244737]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:24:39
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.log', '.nfs', 'images', 'volumes', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:24:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:24:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:24:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:39.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:24:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:24:39 np0005590810 python3.9[244888]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:24:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:24:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:40.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:40 np0005590810 python3.9[245034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012679.2281559-2654-3503850190272/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:40 np0005590810 python3.9[245184]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:24:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 21 11:24:41 np0005590810 podman[245281]: 2026-01-21 16:24:41.315155585 +0000 UTC m=+0.100099426 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 21 11:24:41 np0005590810 python3.9[245321]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012680.427121-2654-48848029879788/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:41.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:24:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 8129 writes, 31K keys, 8129 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 8129 writes, 1655 syncs, 4.91 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 710 writes, 1243 keys, 710 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s#012Interval WAL: 710 writes, 347 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a71a4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a71a4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 21 11:24:42 np0005590810 python3.9[245484]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:24:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:42.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:42 np0005590810 python3.9[245605]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012681.5913405-2654-221506383186449/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:42 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:24:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:42 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:24:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:42 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 21 11:24:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Jan 21 11:24:43 np0005590810 python3.9[245756]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:24:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:43.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:43 np0005590810 python3.9[245878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012682.7389445-2654-250604595974587/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:43 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:24:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:43 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:24:43 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:43 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:24:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:24:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:44.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:24:44 np0005590810 python3.9[246030]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:24:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Jan 21 11:24:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:45.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:24:45] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 21 11:24:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:24:45] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 21 11:24:45 np0005590810 python3.9[246184]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:24:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:46.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:46 np0005590810 python3.9[246336]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:24:47 np0005590810 python3.9[246489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:24:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:47.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:24:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Jan 21 11:24:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:47.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:47 np0005590810 python3.9[246613]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769012686.5967624-2975-42359241408176/.source _original_basename=.frdeqgv2 follow=False checksum=de0599c8d86168d08d4bd49a9894d7a58cf85e00 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 21 11:24:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:48.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:48 np0005590810 python3.9[246765]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:24:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:48.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:24:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Jan 21 11:24:49 np0005590810 python3.9[246919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:24:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:49.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:24:49 np0005590810 python3.9[247040]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012688.95709-3053-101762031719351/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:24:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:50.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d48000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:50 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:50 np0005590810 python3.9[247206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 11:24:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 21 11:24:51 np0005590810 python3.9[247329]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769012690.3396254-3098-253020084245701/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 11:24:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:51 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d2c000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:24:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:51.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:24:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:52.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162452 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:24:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:52 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4c001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:52 np0005590810 python3.9[247481]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 21 11:24:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:52 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d48001bd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:53 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:24:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:53 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:24:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 21 11:24:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:53 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:53.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:53 np0005590810 python3.9[247635]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 11:24:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:54.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:24:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:24:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:54 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d2c0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:54 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:24:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 21 11:24:55 np0005590810 python3[247788]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 11:24:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:55 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d48001bd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:55.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:24:55] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:24:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:24:55] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:24:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:56 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:24:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:56.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:56 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:56 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:57.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:24:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 21 11:24:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:57 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:57.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162458 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 2ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:24:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:24:58.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:58 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d480089d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:58 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:58.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:24:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:24:58.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:24:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:24:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:24:59 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:24:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:24:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:24:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:24:59.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:24:59 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:25:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:00.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:00 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4c0032d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:00 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d480089d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:25:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:01 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:25:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:01.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:25:01 np0005590810 podman[247845]: 2026-01-21 16:25:01.970847334 +0000 UTC m=+3.341661964 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 21 11:25:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:02.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:02 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d2c001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:02 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4c0032d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 21 11:25:03 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:03 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d480096e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:03.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:04.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:04 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:04 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:04 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d2c001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:04 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:25:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 21 11:25:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:05 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4c0032d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:05.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:25:05] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:25:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:25:05] "GET /metrics HTTP/1.1" 200 48352 "" "Prometheus/2.51.0"
Jan 21 11:25:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:06.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:06 np0005590810 podman[247802]: 2026-01-21 16:25:06.512041662 +0000 UTC m=+11.292630567 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 21 11:25:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:06 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d480096e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:06 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:06 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:06 np0005590810 podman[247942]: 2026-01-21 16:25:06.739494015 +0000 UTC m=+0.075729029 container create fbec749a8e20e79d1919323c96ad5981904043a92939afa066d6b1a57c8d2396 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 11:25:06 np0005590810 podman[247942]: 2026-01-21 16:25:06.703307726 +0000 UTC m=+0.039542820 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 21 11:25:06 np0005590810 python3[247788]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 21 11:25:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:07.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:25:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:07.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:25:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:07.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:25:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 21 11:25:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:07 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d2c002160 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:07.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:08.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:08 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4c0032d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:08 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d480096e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:08 np0005590810 python3.9[248134]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:25:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:08.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:25:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:25:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:25:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:25:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:25:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:25:09 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:09 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:09.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:25:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:25:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:25:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:25:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:25:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:10.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:10 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:25:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:10 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4c0032d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:25:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:25:11 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:11 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4800a3f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.002000065s ======
Jan 21 11:25:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:11.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Jan 21 11:25:11 np0005590810 python3.9[248339]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 21 11:25:11 np0005590810 podman[248373]: 2026-01-21 16:25:11.708308967 +0000 UTC m=+0.081897468 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 21 11:25:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:25:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:25:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:12.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:12 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:12 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d2c0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:12 np0005590810 python3.9[248548]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 11:25:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:25:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:25:12 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:12 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:12 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:12 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:25:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:25:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 259 B/s rd, 0 op/s
Jan 21 11:25:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 311 B/s rd, 0 op/s
Jan 21 11:25:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:25:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.014734) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012713014800, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1230, "num_deletes": 251, "total_data_size": 2313263, "memory_usage": 2359744, "flush_reason": "Manual Compaction"}
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012713034357, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 2257770, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18723, "largest_seqno": 19952, "table_properties": {"data_size": 2251888, "index_size": 3209, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12251, "raw_average_key_size": 19, "raw_value_size": 2240224, "raw_average_value_size": 3619, "num_data_blocks": 141, "num_entries": 619, "num_filter_entries": 619, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769012596, "oldest_key_time": 1769012596, "file_creation_time": 1769012713, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 19667 microseconds, and 7252 cpu microseconds.
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.034408) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 2257770 bytes OK
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.034432) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.035943) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.035967) EVENT_LOG_v1 {"time_micros": 1769012713035954, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.035987) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2307882, prev total WAL file size 2307882, number of live WAL files 2.
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.036858) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(2204KB)], [41(11MB)]
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012713036963, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 13846942, "oldest_snapshot_seqno": -1}
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4923 keys, 11672299 bytes, temperature: kUnknown
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012713100419, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 11672299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11637974, "index_size": 20903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 125469, "raw_average_key_size": 25, "raw_value_size": 11547233, "raw_average_value_size": 2345, "num_data_blocks": 858, "num_entries": 4923, "num_filter_entries": 4923, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769012713, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.100736) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 11672299 bytes
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.102286) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.4 rd, 184.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 11.1 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(11.3) write-amplify(5.2) OK, records in: 5443, records dropped: 520 output_compression: NoCompression
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.102310) EVENT_LOG_v1 {"time_micros": 1769012713102299, "job": 20, "event": "compaction_finished", "compaction_time_micros": 63402, "compaction_time_cpu_micros": 28875, "output_level": 6, "num_output_files": 1, "total_output_size": 11672299, "num_input_records": 5443, "num_output_records": 4923, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012713102997, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769012713105861, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.036748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.105958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.105966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.105968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.105969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:25:13.105971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:25:13 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:13 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4c0032d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:25:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:13.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:25:13 np0005590810 podman[248763]: 2026-01-21 16:25:13.69131083 +0000 UTC m=+0.055361021 container create 6b7f8459c49b3d71c29b46e657ae366d4b07d3fd5f38d248bc3c484168141237 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 21 11:25:13 np0005590810 systemd[1]: Started libpod-conmon-6b7f8459c49b3d71c29b46e657ae366d4b07d3fd5f38d248bc3c484168141237.scope.
Jan 21 11:25:13 np0005590810 podman[248763]: 2026-01-21 16:25:13.668586176 +0000 UTC m=+0.032636417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:25:13 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:25:13 np0005590810 podman[248763]: 2026-01-21 16:25:13.787159168 +0000 UTC m=+0.151209389 container init 6b7f8459c49b3d71c29b46e657ae366d4b07d3fd5f38d248bc3c484168141237 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:25:13 np0005590810 podman[248763]: 2026-01-21 16:25:13.793803193 +0000 UTC m=+0.157853384 container start 6b7f8459c49b3d71c29b46e657ae366d4b07d3fd5f38d248bc3c484168141237 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:25:13 np0005590810 podman[248763]: 2026-01-21 16:25:13.797511193 +0000 UTC m=+0.161561404 container attach 6b7f8459c49b3d71c29b46e657ae366d4b07d3fd5f38d248bc3c484168141237 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:25:13 np0005590810 brave_grothendieck[248808]: 167 167
Jan 21 11:25:13 np0005590810 systemd[1]: libpod-6b7f8459c49b3d71c29b46e657ae366d4b07d3fd5f38d248bc3c484168141237.scope: Deactivated successfully.
Jan 21 11:25:13 np0005590810 podman[248763]: 2026-01-21 16:25:13.802402021 +0000 UTC m=+0.166452232 container died 6b7f8459c49b3d71c29b46e657ae366d4b07d3fd5f38d248bc3c484168141237 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Jan 21 11:25:13 np0005590810 systemd[1]: var-lib-containers-storage-overlay-fcda62437eacebd088d27a97547fe9b5b49b4f99a5fdf19acddf04f6b0c1ad3b-merged.mount: Deactivated successfully.
Jan 21 11:25:13 np0005590810 podman[248763]: 2026-01-21 16:25:13.85836079 +0000 UTC m=+0.222410981 container remove 6b7f8459c49b3d71c29b46e657ae366d4b07d3fd5f38d248bc3c484168141237 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 11:25:13 np0005590810 systemd[1]: libpod-conmon-6b7f8459c49b3d71c29b46e657ae366d4b07d3fd5f38d248bc3c484168141237.scope: Deactivated successfully.
Jan 21 11:25:13 np0005590810 python3[248805]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:13 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:25:14 np0005590810 podman[248833]: 2026-01-21 16:25:14.034359319 +0000 UTC m=+0.047656721 container create 00320afacc934cc75906115c52e1029a376e6b9db43d3567f944fd94d494fc40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 21 11:25:14 np0005590810 systemd[1]: Started libpod-conmon-00320afacc934cc75906115c52e1029a376e6b9db43d3567f944fd94d494fc40.scope.
Jan 21 11:25:14 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:25:14 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b427b44a38fa0e99b2cfd9144ec293f95bde2cb75fa6e8959256c85e612eef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:14 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b427b44a38fa0e99b2cfd9144ec293f95bde2cb75fa6e8959256c85e612eef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:14 np0005590810 podman[248833]: 2026-01-21 16:25:14.013152254 +0000 UTC m=+0.026449646 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:25:14 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b427b44a38fa0e99b2cfd9144ec293f95bde2cb75fa6e8959256c85e612eef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:14 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b427b44a38fa0e99b2cfd9144ec293f95bde2cb75fa6e8959256c85e612eef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:14 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b427b44a38fa0e99b2cfd9144ec293f95bde2cb75fa6e8959256c85e612eef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:14 np0005590810 podman[248833]: 2026-01-21 16:25:14.123677017 +0000 UTC m=+0.136974409 container init 00320afacc934cc75906115c52e1029a376e6b9db43d3567f944fd94d494fc40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_tu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:25:14 np0005590810 podman[248833]: 2026-01-21 16:25:14.131085017 +0000 UTC m=+0.144382389 container start 00320afacc934cc75906115c52e1029a376e6b9db43d3567f944fd94d494fc40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:25:14 np0005590810 podman[248833]: 2026-01-21 16:25:14.134962981 +0000 UTC m=+0.148260373 container attach 00320afacc934cc75906115c52e1029a376e6b9db43d3567f944fd94d494fc40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:25:14 np0005590810 podman[248887]: 2026-01-21 16:25:14.155485385 +0000 UTC m=+0.049475900 container create e2f881eeb2c071cff91a36d9d231b563696541da0189b69f6ccac512371c0d55 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 21 11:25:14 np0005590810 podman[248887]: 2026-01-21 16:25:14.130821008 +0000 UTC m=+0.024811543 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 21 11:25:14 np0005590810 python3[248805]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 21 11:25:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:25:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:14.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:25:14 np0005590810 magical_tu[248879]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:25:14 np0005590810 magical_tu[248879]: --> All data devices are unavailable
Jan 21 11:25:14 np0005590810 systemd[1]: libpod-00320afacc934cc75906115c52e1029a376e6b9db43d3567f944fd94d494fc40.scope: Deactivated successfully.
Jan 21 11:25:14 np0005590810 podman[248833]: 2026-01-21 16:25:14.533717942 +0000 UTC m=+0.547015294 container died 00320afacc934cc75906115c52e1029a376e6b9db43d3567f944fd94d494fc40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_tu, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 21 11:25:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:14 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4800a3f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:14 np0005590810 systemd[1]: var-lib-containers-storage-overlay-70b427b44a38fa0e99b2cfd9144ec293f95bde2cb75fa6e8959256c85e612eef-merged.mount: Deactivated successfully.
Jan 21 11:25:14 np0005590810 podman[248833]: 2026-01-21 16:25:14.583082477 +0000 UTC m=+0.596379839 container remove 00320afacc934cc75906115c52e1029a376e6b9db43d3567f944fd94d494fc40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_tu, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Jan 21 11:25:14 np0005590810 systemd[1]: libpod-conmon-00320afacc934cc75906115c52e1029a376e6b9db43d3567f944fd94d494fc40.scope: Deactivated successfully.
Jan 21 11:25:14 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:14 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:25:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 311 B/s rd, 0 op/s
Jan 21 11:25:15 np0005590810 podman[249195]: 2026-01-21 16:25:15.243351121 +0000 UTC m=+0.048485257 container create 3dddccb752511a1a584556888a15830a0c81504d2e683a326dafb9a02ade4898 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_haibt, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:25:15 np0005590810 systemd[1]: Started libpod-conmon-3dddccb752511a1a584556888a15830a0c81504d2e683a326dafb9a02ade4898.scope.
Jan 21 11:25:15 np0005590810 podman[249195]: 2026-01-21 16:25:15.221613309 +0000 UTC m=+0.026747495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:25:15 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:25:15 np0005590810 python3.9[249188]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:25:15 np0005590810 podman[249195]: 2026-01-21 16:25:15.339436797 +0000 UTC m=+0.144570954 container init 3dddccb752511a1a584556888a15830a0c81504d2e683a326dafb9a02ade4898 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_haibt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:25:15 np0005590810 podman[249195]: 2026-01-21 16:25:15.352857811 +0000 UTC m=+0.157991947 container start 3dddccb752511a1a584556888a15830a0c81504d2e683a326dafb9a02ade4898 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_haibt, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 21 11:25:15 np0005590810 podman[249195]: 2026-01-21 16:25:15.356851881 +0000 UTC m=+0.161986017 container attach 3dddccb752511a1a584556888a15830a0c81504d2e683a326dafb9a02ade4898 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 21 11:25:15 np0005590810 systemd[1]: libpod-3dddccb752511a1a584556888a15830a0c81504d2e683a326dafb9a02ade4898.scope: Deactivated successfully.
Jan 21 11:25:15 np0005590810 infallible_haibt[249211]: 167 167
Jan 21 11:25:15 np0005590810 conmon[249211]: conmon 3dddccb752511a1a5845 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3dddccb752511a1a584556888a15830a0c81504d2e683a326dafb9a02ade4898.scope/container/memory.events
Jan 21 11:25:15 np0005590810 podman[249195]: 2026-01-21 16:25:15.359633851 +0000 UTC m=+0.164768017 container died 3dddccb752511a1a584556888a15830a0c81504d2e683a326dafb9a02ade4898 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_haibt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 21 11:25:15 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c026bed38015b79ac586120f46b331d3a95f757467815bfb90d85c4876997c84-merged.mount: Deactivated successfully.
Jan 21 11:25:15 np0005590810 podman[249195]: 2026-01-21 16:25:15.413589395 +0000 UTC m=+0.218723531 container remove 3dddccb752511a1a584556888a15830a0c81504d2e683a326dafb9a02ade4898 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 21 11:25:15 np0005590810 systemd[1]: libpod-conmon-3dddccb752511a1a584556888a15830a0c81504d2e683a326dafb9a02ade4898.scope: Deactivated successfully.
Jan 21 11:25:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:15 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d2c0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:15.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:25:15] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:25:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:25:15] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:25:15 np0005590810 podman[249259]: 2026-01-21 16:25:15.602821172 +0000 UTC m=+0.058068088 container create f4c75dc0f8f469cdf14a61b5cdc71641091dfeb6369049fe8ff6f5840215fb50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_mclean, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:25:15 np0005590810 systemd[1]: Started libpod-conmon-f4c75dc0f8f469cdf14a61b5cdc71641091dfeb6369049fe8ff6f5840215fb50.scope.
Jan 21 11:25:15 np0005590810 podman[249259]: 2026-01-21 16:25:15.579646132 +0000 UTC m=+0.034893078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:25:15 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:25:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac766edb8e94d252449ce436657f3ec0880fcbdf38fb3aa6e48400dd73bb865/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac766edb8e94d252449ce436657f3ec0880fcbdf38fb3aa6e48400dd73bb865/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac766edb8e94d252449ce436657f3ec0880fcbdf38fb3aa6e48400dd73bb865/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac766edb8e94d252449ce436657f3ec0880fcbdf38fb3aa6e48400dd73bb865/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:16.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:16 np0005590810 podman[249259]: 2026-01-21 16:25:16.187501612 +0000 UTC m=+0.642748498 container init f4c75dc0f8f469cdf14a61b5cdc71641091dfeb6369049fe8ff6f5840215fb50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:25:16 np0005590810 podman[249259]: 2026-01-21 16:25:16.195521331 +0000 UTC m=+0.650768217 container start f4c75dc0f8f469cdf14a61b5cdc71641091dfeb6369049fe8ff6f5840215fb50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:25:16 np0005590810 podman[249259]: 2026-01-21 16:25:16.198630602 +0000 UTC m=+0.653877518 container attach f4c75dc0f8f469cdf14a61b5cdc71641091dfeb6369049fe8ff6f5840215fb50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 21 11:25:16 np0005590810 python3.9[249409]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:25:16 np0005590810 angry_mclean[249276]: {
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:    "0": [
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:        {
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            "devices": [
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "/dev/loop3"
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            ],
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            "lv_name": "ceph_lv0",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            "lv_size": "21470642176",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            "name": "ceph_lv0",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            "tags": {
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.cluster_name": "ceph",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.crush_device_class": "",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.encrypted": "0",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.osd_id": "0",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.type": "block",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.vdo": "0",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:                "ceph.with_tpm": "0"
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            },
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            "type": "block",
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:            "vg_name": "ceph_vg0"
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:        }
Jan 21 11:25:16 np0005590810 angry_mclean[249276]:    ]
Jan 21 11:25:16 np0005590810 angry_mclean[249276]: }
Jan 21 11:25:16 np0005590810 systemd[1]: libpod-f4c75dc0f8f469cdf14a61b5cdc71641091dfeb6369049fe8ff6f5840215fb50.scope: Deactivated successfully.
Jan 21 11:25:16 np0005590810 podman[249259]: 2026-01-21 16:25:16.53483329 +0000 UTC m=+0.990080206 container died f4c75dc0f8f469cdf14a61b5cdc71641091dfeb6369049fe8ff6f5840215fb50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_mclean, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:25:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:16 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4c0032d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:16 np0005590810 systemd[1]: var-lib-containers-storage-overlay-fac766edb8e94d252449ce436657f3ec0880fcbdf38fb3aa6e48400dd73bb865-merged.mount: Deactivated successfully.
Jan 21 11:25:16 np0005590810 podman[249259]: 2026-01-21 16:25:16.584123443 +0000 UTC m=+1.039370339 container remove f4c75dc0f8f469cdf14a61b5cdc71641091dfeb6369049fe8ff6f5840215fb50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:25:16 np0005590810 systemd[1]: libpod-conmon-f4c75dc0f8f469cdf14a61b5cdc71641091dfeb6369049fe8ff6f5840215fb50.scope: Deactivated successfully.
Jan 21 11:25:16 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:16 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4800a3f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 311 B/s rd, 0 op/s
Jan 21 11:25:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:17.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:25:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:17.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:25:17 np0005590810 python3.9[249634]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769012716.5849721-3386-83895105201580/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 11:25:17 np0005590810 podman[249669]: 2026-01-21 16:25:17.247389424 +0000 UTC m=+0.050673029 container create 51fad50eb614281b9497628897b8bde0d9d02775b58c515962a34d5bfe46322e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_kalam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 21 11:25:17 np0005590810 systemd[1]: Started libpod-conmon-51fad50eb614281b9497628897b8bde0d9d02775b58c515962a34d5bfe46322e.scope.
Jan 21 11:25:17 np0005590810 podman[249669]: 2026-01-21 16:25:17.226765998 +0000 UTC m=+0.030049643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:25:17 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:25:17 np0005590810 podman[249669]: 2026-01-21 16:25:17.346969483 +0000 UTC m=+0.150253108 container init 51fad50eb614281b9497628897b8bde0d9d02775b58c515962a34d5bfe46322e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:25:17 np0005590810 podman[249669]: 2026-01-21 16:25:17.356332916 +0000 UTC m=+0.159616511 container start 51fad50eb614281b9497628897b8bde0d9d02775b58c515962a34d5bfe46322e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:25:17 np0005590810 podman[249669]: 2026-01-21 16:25:17.359973563 +0000 UTC m=+0.163257168 container attach 51fad50eb614281b9497628897b8bde0d9d02775b58c515962a34d5bfe46322e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_kalam, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:25:17 np0005590810 systemd[1]: libpod-51fad50eb614281b9497628897b8bde0d9d02775b58c515962a34d5bfe46322e.scope: Deactivated successfully.
Jan 21 11:25:17 np0005590810 tender_kalam[249703]: 167 167
Jan 21 11:25:17 np0005590810 conmon[249703]: conmon 51fad50eb614281b9497 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-51fad50eb614281b9497628897b8bde0d9d02775b58c515962a34d5bfe46322e.scope/container/memory.events
Jan 21 11:25:17 np0005590810 podman[249669]: 2026-01-21 16:25:17.365304175 +0000 UTC m=+0.168587780 container died 51fad50eb614281b9497628897b8bde0d9d02775b58c515962a34d5bfe46322e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_kalam, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:25:17 np0005590810 systemd[1]: var-lib-containers-storage-overlay-1486b4df3a040800f087d07a1b9b733d36508276dae1102131a6d4f2c0a5ecb4-merged.mount: Deactivated successfully.
Jan 21 11:25:17 np0005590810 podman[249669]: 2026-01-21 16:25:17.411150808 +0000 UTC m=+0.214434413 container remove 51fad50eb614281b9497628897b8bde0d9d02775b58c515962a34d5bfe46322e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:25:17 np0005590810 systemd[1]: libpod-conmon-51fad50eb614281b9497628897b8bde0d9d02775b58c515962a34d5bfe46322e.scope: Deactivated successfully.
Jan 21 11:25:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:17 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:17.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:17 np0005590810 podman[249785]: 2026-01-21 16:25:17.620113732 +0000 UTC m=+0.058304425 container create bca5c18f60a2190f02790805fd4f3737803295033f7c982f8376797af1ca7a91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 11:25:17 np0005590810 systemd[1]: Started libpod-conmon-bca5c18f60a2190f02790805fd4f3737803295033f7c982f8376797af1ca7a91.scope.
Jan 21 11:25:17 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:25:17 np0005590810 podman[249785]: 2026-01-21 16:25:17.598046539 +0000 UTC m=+0.036237272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:25:17 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443d48d360646b2664f9128f99864628a2a9bc2d8dd9fd3528acb05b0f3c05cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:17 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443d48d360646b2664f9128f99864628a2a9bc2d8dd9fd3528acb05b0f3c05cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:17 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443d48d360646b2664f9128f99864628a2a9bc2d8dd9fd3528acb05b0f3c05cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:17 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443d48d360646b2664f9128f99864628a2a9bc2d8dd9fd3528acb05b0f3c05cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:17 np0005590810 podman[249785]: 2026-01-21 16:25:17.725105057 +0000 UTC m=+0.163295820 container init bca5c18f60a2190f02790805fd4f3737803295033f7c982f8376797af1ca7a91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mcnulty, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 21 11:25:17 np0005590810 podman[249785]: 2026-01-21 16:25:17.735583335 +0000 UTC m=+0.173774028 container start bca5c18f60a2190f02790805fd4f3737803295033f7c982f8376797af1ca7a91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:25:17 np0005590810 podman[249785]: 2026-01-21 16:25:17.739635046 +0000 UTC m=+0.177825719 container attach bca5c18f60a2190f02790805fd4f3737803295033f7c982f8376797af1ca7a91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mcnulty, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:25:17 np0005590810 python3.9[249779]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 11:25:17 np0005590810 systemd[1]: Reloading.
Jan 21 11:25:17 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:25:17 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:25:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:18.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:18 np0005590810 lvm[249981]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:25:18 np0005590810 lvm[249981]: VG ceph_vg0 finished
Jan 21 11:25:18 np0005590810 priceless_mcnulty[249801]: {}
Jan 21 11:25:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:18 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d2c0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:18 np0005590810 systemd[1]: libpod-bca5c18f60a2190f02790805fd4f3737803295033f7c982f8376797af1ca7a91.scope: Deactivated successfully.
Jan 21 11:25:18 np0005590810 systemd[1]: libpod-bca5c18f60a2190f02790805fd4f3737803295033f7c982f8376797af1ca7a91.scope: Consumed 1.300s CPU time.
Jan 21 11:25:18 np0005590810 podman[249785]: 2026-01-21 16:25:18.567999254 +0000 UTC m=+1.006189947 container died bca5c18f60a2190f02790805fd4f3737803295033f7c982f8376797af1ca7a91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mcnulty, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 11:25:18 np0005590810 systemd[1]: var-lib-containers-storage-overlay-443d48d360646b2664f9128f99864628a2a9bc2d8dd9fd3528acb05b0f3c05cb-merged.mount: Deactivated successfully.
Jan 21 11:25:18 np0005590810 podman[249785]: 2026-01-21 16:25:18.620665226 +0000 UTC m=+1.058855899 container remove bca5c18f60a2190f02790805fd4f3737803295033f7c982f8376797af1ca7a91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 11:25:18 np0005590810 systemd[1]: libpod-conmon-bca5c18f60a2190f02790805fd4f3737803295033f7c982f8376797af1ca7a91.scope: Deactivated successfully.
Jan 21 11:25:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:25:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:18 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4c0032d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:25:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:18 np0005590810 python3.9[249987]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 11:25:18 np0005590810 systemd[1]: Reloading.
Jan 21 11:25:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:18.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:25:18 np0005590810 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 11:25:18 np0005590810 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 11:25:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 311 B/s rd, 0 op/s
Jan 21 11:25:19 np0005590810 systemd[1]: Starting nova_compute container...
Jan 21 11:25:19 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:25:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb1aeed43a3c86426c68be38d884104b937bbc4ed572762a66097d586e90949/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb1aeed43a3c86426c68be38d884104b937bbc4ed572762a66097d586e90949/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb1aeed43a3c86426c68be38d884104b937bbc4ed572762a66097d586e90949/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb1aeed43a3c86426c68be38d884104b937bbc4ed572762a66097d586e90949/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb1aeed43a3c86426c68be38d884104b937bbc4ed572762a66097d586e90949/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:19 np0005590810 podman[250068]: 2026-01-21 16:25:19.378172833 +0000 UTC m=+0.126393756 container init e2f881eeb2c071cff91a36d9d231b563696541da0189b69f6ccac512371c0d55 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 21 11:25:19 np0005590810 podman[250068]: 2026-01-21 16:25:19.38610772 +0000 UTC m=+0.134328583 container start e2f881eeb2c071cff91a36d9d231b563696541da0189b69f6ccac512371c0d55 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 21 11:25:19 np0005590810 podman[250068]: nova_compute
Jan 21 11:25:19 np0005590810 nova_compute[250083]: + sudo -E kolla_set_configs
Jan 21 11:25:19 np0005590810 systemd[1]: Started nova_compute container.
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Validating config file
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Copying service configuration files
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Deleting /etc/ceph
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Creating directory /etc/ceph
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /etc/ceph
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Writing out command to execute
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 11:25:19 np0005590810 nova_compute[250083]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 11:25:19 np0005590810 nova_compute[250083]: ++ cat /run_command
Jan 21 11:25:19 np0005590810 nova_compute[250083]: + CMD=nova-compute
Jan 21 11:25:19 np0005590810 nova_compute[250083]: + ARGS=
Jan 21 11:25:19 np0005590810 nova_compute[250083]: + sudo kolla_copy_cacerts
Jan 21 11:25:19 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:19 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4800a3f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:19 np0005590810 nova_compute[250083]: + [[ ! -n '' ]]
Jan 21 11:25:19 np0005590810 nova_compute[250083]: + . kolla_extend_start
Jan 21 11:25:19 np0005590810 nova_compute[250083]: Running command: 'nova-compute'
Jan 21 11:25:19 np0005590810 nova_compute[250083]: + echo 'Running command: '\''nova-compute'\'''
Jan 21 11:25:19 np0005590810 nova_compute[250083]: + umask 0022
Jan 21 11:25:19 np0005590810 nova_compute[250083]: + exec nova-compute
Jan 21 11:25:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:19.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:25:20 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:20 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:25:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:20.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:20 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d40002520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:20 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:20 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d2c0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:20 np0005590810 python3.9[250270]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:25:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 311 B/s rd, 0 op/s
Jan 21 11:25:21 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:21 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4c0032d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:25:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:21.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:25:21 np0005590810 python3.9[250422]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:25:21 np0005590810 nova_compute[250083]: 2026-01-21 16:25:21.804 250087 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 21 11:25:21 np0005590810 nova_compute[250083]: 2026-01-21 16:25:21.805 250087 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 21 11:25:21 np0005590810 nova_compute[250083]: 2026-01-21 16:25:21.805 250087 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 21 11:25:21 np0005590810 nova_compute[250083]: 2026-01-21 16:25:21.805 250087 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 21 11:25:21 np0005590810 nova_compute[250083]: 2026-01-21 16:25:21.966 250087 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.000 250087 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.001 250087 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 21 11:25:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:25:22.014 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:25:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:25:22.015 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:25:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:25:22.015 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:25:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:22 np0005590810 kernel: ganesha.nfsd[247066]: segfault at 50 ip 00007f1dd2b5c32e sp 00007f1d5e7fb210 error 4 in libntirpc.so.5.8[7f1dd2b41000+2c000] likely on CPU 6 (core 0, socket 6)
Jan 21 11:25:22 np0005590810 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 21 11:25:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[244220]: 21/01/2026 16:25:22 : epoch 6970fdc4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1d4800a3f0 fd 42 proxy ignored for local
Jan 21 11:25:22 np0005590810 systemd[1]: Started Process Core Dump (PID 250579/UID 0).
Jan 21 11:25:22 np0005590810 python3.9[250576]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.715 250087 INFO nova.virt.driver [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.854 250087 INFO nova.compute.provider_config [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.876 250087 DEBUG oslo_concurrency.lockutils [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.876 250087 DEBUG oslo_concurrency.lockutils [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.876 250087 DEBUG oslo_concurrency.lockutils [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.877 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.877 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.877 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.877 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.877 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.877 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.877 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.878 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.878 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.878 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.878 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.878 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.878 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.878 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.879 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.879 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.879 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.879 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.879 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.879 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.880 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.880 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.880 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.880 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.880 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.880 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.880 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.881 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.881 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.881 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.881 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.881 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.881 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.882 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.882 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.882 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.882 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.882 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.882 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.883 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.883 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.883 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.883 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.883 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.883 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.884 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.884 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.884 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.884 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.884 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.884 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.884 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.885 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.885 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.885 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.885 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.885 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.885 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.885 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.885 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.886 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.886 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.886 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.886 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.886 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.886 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.886 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.886 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.887 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.887 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.887 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.887 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.887 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.887 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.887 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.887 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.888 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.888 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.888 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.888 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.888 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.888 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.888 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.889 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.889 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.889 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.889 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.889 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.889 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.889 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.890 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.890 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.890 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.890 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.890 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.890 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.890 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.890 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.891 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.891 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.891 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.891 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.891 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.891 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.892 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.892 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.892 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.892 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.892 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.892 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.892 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.892 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.893 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.893 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.893 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.893 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.893 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.893 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.893 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.894 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.894 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.894 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.894 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.894 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.894 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.894 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.894 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.895 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.895 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.895 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.895 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.895 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.895 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.895 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.895 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.896 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.896 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.896 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.896 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.896 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.896 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.896 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.897 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.897 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.897 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.897 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.897 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.897 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.897 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.897 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.898 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.898 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.898 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.898 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.898 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.898 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.898 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.899 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.899 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.899 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.899 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.899 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.899 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.899 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.900 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.900 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.900 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.900 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.900 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.900 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.901 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.901 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.901 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.901 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.901 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.901 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.901 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.902 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.902 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.902 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.902 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.902 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.902 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.902 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.903 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.903 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.903 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.903 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.903 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.903 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.903 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.904 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.904 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.904 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.904 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.904 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.904 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.904 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.904 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.905 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.905 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.905 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.905 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.905 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.905 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.905 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.906 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.906 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.906 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.906 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.906 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.906 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.906 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.906 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.907 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.907 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.907 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.907 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.907 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.907 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.907 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.908 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.908 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.908 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.908 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.908 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.908 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.908 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.908 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.909 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.909 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.909 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.909 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.909 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.909 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.909 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.910 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.910 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.910 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.910 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.910 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.910 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.910 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.911 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.911 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.911 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.911 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.911 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.911 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.911 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.911 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.912 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.912 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.912 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.912 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.912 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.912 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.912 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.913 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.913 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.913 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.913 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.913 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.913 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.913 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.913 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.914 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.914 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.914 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.914 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.914 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.914 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.914 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.915 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.915 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.915 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.915 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.915 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.915 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.915 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.916 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.916 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.916 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.916 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.916 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.916 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.916 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.916 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.917 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.917 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.917 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.917 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.917 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.917 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.917 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.918 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.918 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.918 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.918 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.918 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.918 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.918 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.919 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.919 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.919 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.919 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.919 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.919 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.919 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.919 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.920 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.920 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.920 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.920 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.920 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.920 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.920 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.921 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.921 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.921 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.921 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.921 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.921 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.921 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.922 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.922 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.922 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.922 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.922 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.922 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.922 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.922 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.923 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.923 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.923 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.923 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.923 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.923 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.923 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.924 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.924 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.924 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.924 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.924 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.924 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.924 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.924 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.925 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.925 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.925 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.925 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.925 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.925 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.925 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.926 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.926 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.926 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.926 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.926 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.926 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.927 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.927 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.927 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.927 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.927 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.927 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.927 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.928 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.928 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.928 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.928 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.928 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.928 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.928 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.928 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.929 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.929 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.929 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.929 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.929 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.929 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.929 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.930 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.930 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.930 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.930 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.930 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.930 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.930 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.931 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.931 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.931 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.931 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.931 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.931 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.931 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.931 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.932 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.932 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.932 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.932 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.932 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.932 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.932 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.933 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.933 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.933 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.933 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.933 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.933 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.933 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.933 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.934 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.934 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.934 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.934 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.934 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.935 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.935 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.935 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.935 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.935 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.935 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.935 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.936 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.936 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.936 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.936 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.936 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.936 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.936 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.936 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.937 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.937 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.937 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.937 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.937 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.937 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.937 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.937 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.938 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.938 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.938 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.938 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.938 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.938 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.939 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.939 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.939 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.939 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.939 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.939 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.939 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.940 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.940 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.940 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.940 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.940 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.940 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.940 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.941 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.941 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.941 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.941 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.941 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.941 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.941 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.941 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.942 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.942 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.942 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.942 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.942 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.942 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.942 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.943 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.943 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.943 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.943 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.943 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.943 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.943 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.944 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.944 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.944 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.944 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.944 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.944 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.944 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.944 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.945 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.945 250087 WARNING oslo_config.cfg [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 21 11:25:22 np0005590810 nova_compute[250083]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 21 11:25:22 np0005590810 nova_compute[250083]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 21 11:25:22 np0005590810 nova_compute[250083]: and ``live_migration_inbound_addr`` respectively.
Jan 21 11:25:22 np0005590810 nova_compute[250083]: ).  Its value may be silently ignored in the future.#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.945 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.945 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.945 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.945 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.946 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.946 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.946 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.946 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.946 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.946 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.946 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.947 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.947 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.947 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.947 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.947 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.947 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.947 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.948 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.rbd_secret_uuid        = d9745984-fea8-5195-8ec5-61f685b5c785 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.948 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.948 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.948 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.948 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.948 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.948 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.949 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.949 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.949 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.949 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.949 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.949 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.949 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.950 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.950 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.950 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.950 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.950 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.950 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.950 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.951 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.951 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.951 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.951 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.951 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.951 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.951 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.952 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.952 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.952 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.952 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.952 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.952 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.952 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.953 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.953 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.953 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.953 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.953 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.953 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.953 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.953 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.954 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.954 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.954 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.954 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.954 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.954 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.954 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.955 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.955 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.955 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.955 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.955 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.955 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.955 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.956 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.956 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.956 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.956 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.956 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.956 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.956 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.957 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.957 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.957 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.957 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.957 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.957 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.957 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.958 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.958 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.958 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.958 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.958 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.958 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.958 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.959 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.959 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.959 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.959 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.959 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.959 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.960 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.960 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.960 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.960 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.960 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.960 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.961 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.961 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.961 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.961 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.961 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.961 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.961 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.962 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.963 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.963 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.963 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.963 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.964 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.964 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.964 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.964 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.964 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.964 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.965 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.965 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.965 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.965 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.965 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.965 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.965 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.966 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.966 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.966 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.966 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.966 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.966 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.967 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.967 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.967 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.967 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.967 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.967 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.968 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.968 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.968 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.968 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.968 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.968 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.968 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.969 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.969 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.969 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.969 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.969 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.969 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.970 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.970 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.970 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.970 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.970 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.970 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.971 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.971 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.971 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.971 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.971 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.972 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.972 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.972 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.972 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.972 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.972 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.972 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.973 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.973 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.973 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.973 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.973 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.973 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.974 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.974 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.974 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.974 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.974 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.974 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.974 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.975 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.975 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.975 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.975 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.975 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.975 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.975 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.976 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.976 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.976 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.976 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.976 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.976 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.977 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.977 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.977 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.977 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.977 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.977 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.977 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.978 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.978 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.978 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.978 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.978 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.978 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.979 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.979 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.979 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.979 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.979 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.979 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.979 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.980 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.980 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.980 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.980 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.980 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.980 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.980 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.981 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.981 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.981 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.981 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.981 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.982 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.982 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.982 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.982 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.982 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.982 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.982 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.983 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.983 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.983 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.983 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.983 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.983 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.984 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.984 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.984 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.984 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.984 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.985 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.985 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.985 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.986 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.986 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.986 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.986 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.986 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.986 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.987 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.987 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.987 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.987 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.987 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.988 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.988 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.988 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.988 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.988 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.988 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.989 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.989 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.989 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.989 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.989 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.989 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.990 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.990 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.990 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.990 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.990 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.991 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.991 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.991 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.991 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.991 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.991 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.992 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.992 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.992 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.992 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.992 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.993 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.993 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.993 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.993 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.993 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.993 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.993 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.994 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.994 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.994 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.994 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.994 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.994 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.995 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.995 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.995 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.995 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.995 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.996 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.996 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.996 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.996 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.996 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.996 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.996 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.997 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.997 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.997 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.997 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.997 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.997 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.997 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.998 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.998 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.998 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.998 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.998 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.998 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.998 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.999 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.999 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.999 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.999 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.999 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:22 np0005590810 nova_compute[250083]: 2026-01-21 16:25:22.999 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.000 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.000 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.000 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.000 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.000 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.000 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.001 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.001 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.001 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.001 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.001 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.001 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.002 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.002 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.002 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.002 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.002 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.002 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.002 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.003 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.003 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.003 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.003 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.003 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.003 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.003 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.004 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.004 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.004 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.004 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.004 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.004 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.005 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.005 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.005 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.005 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.005 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.005 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.005 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.005 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.006 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.006 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.006 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.006 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.006 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.006 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.007 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.007 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.007 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.007 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.007 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.007 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.007 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.008 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.008 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.008 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.008 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.008 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.008 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.008 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.009 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.009 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.009 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.009 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.009 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.010 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.010 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.010 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.010 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.010 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.010 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.010 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.011 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.011 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.011 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.011 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.011 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.011 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.011 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.011 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.012 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.012 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.012 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.012 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.012 250087 DEBUG oslo_service.service [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.014 250087 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.145 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.145 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.146 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.146 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 21 11:25:23 np0005590810 systemd[1]: Starting libvirt QEMU daemon...
Jan 21 11:25:23 np0005590810 systemd[1]: Started libvirt QEMU daemon.
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.235 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff339cc7e80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.237 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff339cc7e80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.241 250087 INFO nova.virt.libvirt.driver [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.335 250087 WARNING nova.virt.libvirt.driver [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 21 11:25:23 np0005590810 nova_compute[250083]: 2026-01-21 16:25:23.335 250087 DEBUG nova.virt.libvirt.volume.mount [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 21 11:25:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:23.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:24 np0005590810 nova_compute[250083]: 2026-01-21 16:25:24.126 250087 INFO nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Libvirt host capabilities <capabilities>
Jan 21 11:25:24 np0005590810 nova_compute[250083]: 
Jan 21 11:25:24 np0005590810 nova_compute[250083]:  <host>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <uuid>ef0b02dd-ef52-452f-a99a-26608ae61ceb</uuid>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <cpu>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <arch>x86_64</arch>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <model>EPYC-Rome-v4</model>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <vendor>AMD</vendor>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <microcode version='16777317'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <signature family='23' model='49' stepping='0'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='x2apic'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='tsc-deadline'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='osxsave'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='hypervisor'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='tsc_adjust'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='spec-ctrl'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='stibp'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='arch-capabilities'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='ssbd'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='cmp_legacy'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='topoext'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='virt-ssbd'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='lbrv'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='tsc-scale'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='vmcb-clean'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='pause-filter'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='pfthreshold'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='svme-addr-chk'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='rdctl-no'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='skip-l1dfl-vmentry'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='mds-no'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <feature name='pschange-mc-no'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <pages unit='KiB' size='4'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <pages unit='KiB' size='2048'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <pages unit='KiB' size='1048576'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    </cpu>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <power_management>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <suspend_mem/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    </power_management>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <iommu support='no'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <migration_features>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <live/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <uri_transports>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:        <uri_transport>tcp</uri_transport>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:        <uri_transport>rdma</uri_transport>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      </uri_transports>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    </migration_features>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <topology>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <cells num='1'>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:        <cell id='0'>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:          <memory unit='KiB'>7864316</memory>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:          <pages unit='KiB' size='4'>1966079</pages>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:          <pages unit='KiB' size='2048'>0</pages>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:          <distances>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:            <sibling id='0' value='10'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:          </distances>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:          <cpus num='8'>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:          </cpus>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:        </cell>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      </cells>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    </topology>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <cache>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    </cache>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <secmodel>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <model>selinux</model>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <doi>0</doi>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    </secmodel>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <secmodel>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <model>dac</model>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <doi>0</doi>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    </secmodel>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:  </host>
Jan 21 11:25:24 np0005590810 nova_compute[250083]: 
Jan 21 11:25:24 np0005590810 nova_compute[250083]:  <guest>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <os_type>hvm</os_type>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <arch name='i686'>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <wordsize>32</wordsize>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <domain type='qemu'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <domain type='kvm'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    </arch>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <features>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <pae/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <nonpae/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <acpi default='on' toggle='yes'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <apic default='on' toggle='no'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <cpuselection/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <deviceboot/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <disksnapshot default='on' toggle='no'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <externalSnapshot/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    </features>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:  </guest>
Jan 21 11:25:24 np0005590810 nova_compute[250083]: 
Jan 21 11:25:24 np0005590810 nova_compute[250083]:  <guest>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <os_type>hvm</os_type>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <arch name='x86_64'>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <wordsize>64</wordsize>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <domain type='qemu'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <domain type='kvm'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    </arch>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    <features>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <acpi default='on' toggle='yes'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <apic default='on' toggle='no'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <cpuselection/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <deviceboot/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <disksnapshot default='on' toggle='no'/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:      <externalSnapshot/>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:    </features>
Jan 21 11:25:24 np0005590810 nova_compute[250083]:  </guest>
Jan 21 11:25:24 np0005590810 nova_compute[250083]: 
Jan 21 11:25:24 np0005590810 nova_compute[250083]: </capabilities>
Jan 21 11:25:24 np0005590810 nova_compute[250083]: #033[00m
Jan 21 11:25:24 np0005590810 nova_compute[250083]: 2026-01-21 16:25:24.133 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 21 11:25:24 np0005590810 python3.9[250786]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 21 11:25:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:24 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:25:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:24.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:24 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:25:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:25:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:25:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.129 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 21 11:25:25 np0005590810 nova_compute[250083]: <domainCapabilities>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <domain>kvm</domain>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <arch>i686</arch>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <vcpu max='4096'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <iothreads supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <os supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <enum name='firmware'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <loader supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>rom</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pflash</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='readonly'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>yes</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>no</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='secure'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>no</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </loader>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </os>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <cpu>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='host-passthrough' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='hostPassthroughMigratable'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>on</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>off</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='maximum' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='maximumMigratable'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>on</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>off</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='host-model' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <vendor>AMD</vendor>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='x2apic'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='hypervisor'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='stibp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='ssbd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='overflow-recov'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='succor'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='lbrv'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='tsc-scale'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='flushbyasid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='pause-filter'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='pfthreshold'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='disable' name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='custom' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='ClearwaterForest'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ddpd-u'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sha512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm3'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='ClearwaterForest-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ddpd-u'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sha512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm3'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cooperlake'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cooperlake-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cooperlake-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Dhyana-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Genoa'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='perfmon-v2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Turin'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='perfmon-v2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbpb'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Turin-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='perfmon-v2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbpb'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-128'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-256'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-128'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-256'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v6'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v7'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='KnightsMill'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512er'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512pf'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='KnightsMill-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512er'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512pf'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G4-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tbm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G5-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tbm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='athlon'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='athlon-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='core2duo'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='core2duo-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='coreduo'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='coreduo-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='n270'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='n270-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='phenom'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='phenom-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </cpu>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <memoryBacking supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <enum name='sourceType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>file</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>anonymous</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>memfd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </memoryBacking>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <devices>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <disk supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='diskDevice'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>disk</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>cdrom</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>floppy</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>lun</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='bus'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>fdc</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>scsi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>usb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>sata</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-non-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </disk>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <graphics supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vnc</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>egl-headless</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>dbus</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </graphics>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <video supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='modelType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vga</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>cirrus</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>none</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>bochs</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>ramfb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </video>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <hostdev supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='mode'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>subsystem</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='startupPolicy'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>default</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>mandatory</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>requisite</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>optional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='subsysType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>usb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pci</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>scsi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='capsType'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='pciBackend'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </hostdev>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <rng supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-non-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendModel'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>random</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>egd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>builtin</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </rng>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <filesystem supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='driverType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>path</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>handle</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtiofs</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </filesystem>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <tpm supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tpm-tis</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tpm-crb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendModel'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>emulator</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>external</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendVersion'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>2.0</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </tpm>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <redirdev supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='bus'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>usb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </redirdev>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <channel supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pty</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>unix</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </channel>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <crypto supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>qemu</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendModel'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>builtin</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </crypto>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <interface supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>default</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>passt</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </interface>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <panic supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>isa</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>hyperv</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </panic>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <console supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>null</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vc</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pty</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>dev</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>file</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pipe</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>stdio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>udp</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tcp</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>unix</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>qemu-vdagent</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>dbus</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </console>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </devices>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <features>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <gic supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <vmcoreinfo supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <genid supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <backingStoreInput supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <backup supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <async-teardown supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <s390-pv supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <ps2 supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <tdx supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <sev supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <sgx supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <hyperv supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='features'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>relaxed</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vapic</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>spinlocks</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vpindex</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>runtime</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>synic</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>stimer</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>reset</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vendor_id</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>frequencies</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>reenlightenment</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tlbflush</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>ipi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>avic</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>emsr_bitmap</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>xmm_input</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <defaults>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <spinlocks>4095</spinlocks>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <stimer_direct>on</stimer_direct>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </defaults>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </hyperv>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <launchSecurity supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </features>
Jan 21 11:25:25 np0005590810 nova_compute[250083]: </domainCapabilities>
Jan 21 11:25:25 np0005590810 nova_compute[250083]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.140 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 21 11:25:25 np0005590810 nova_compute[250083]: <domainCapabilities>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <domain>kvm</domain>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <arch>i686</arch>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <vcpu max='240'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <iothreads supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <os supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <enum name='firmware'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <loader supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>rom</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pflash</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='readonly'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>yes</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>no</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='secure'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>no</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </loader>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </os>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <cpu>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='host-passthrough' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='hostPassthroughMigratable'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>on</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>off</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='maximum' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='maximumMigratable'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>on</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>off</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='host-model' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <vendor>AMD</vendor>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='x2apic'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='hypervisor'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='stibp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='ssbd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='overflow-recov'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='succor'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='lbrv'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='tsc-scale'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='flushbyasid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='pause-filter'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='pfthreshold'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='disable' name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='custom' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='ClearwaterForest'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ddpd-u'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sha512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm3'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='ClearwaterForest-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ddpd-u'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sha512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm3'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cooperlake'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cooperlake-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cooperlake-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Dhyana-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Genoa'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='perfmon-v2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Turin'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='perfmon-v2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbpb'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Turin-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='perfmon-v2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbpb'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-128'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-256'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-128'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-256'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v6'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v7'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='KnightsMill'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512er'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512pf'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='KnightsMill-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512er'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512pf'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G4-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tbm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G5-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tbm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='athlon'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='athlon-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='core2duo'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='core2duo-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='coreduo'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='coreduo-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='n270'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='n270-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='phenom'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='phenom-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </cpu>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <memoryBacking supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <enum name='sourceType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>file</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>anonymous</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>memfd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </memoryBacking>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <devices>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <disk supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='diskDevice'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>disk</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>cdrom</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>floppy</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>lun</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='bus'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>ide</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>fdc</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>scsi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>usb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>sata</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-non-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </disk>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <graphics supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vnc</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>egl-headless</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>dbus</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </graphics>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <video supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='modelType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vga</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>cirrus</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>none</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>bochs</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>ramfb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </video>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <hostdev supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='mode'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>subsystem</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='startupPolicy'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>default</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>mandatory</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>requisite</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>optional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='subsysType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>usb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pci</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>scsi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='capsType'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='pciBackend'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </hostdev>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <rng supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-non-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendModel'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>random</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>egd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>builtin</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </rng>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <filesystem supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='driverType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>path</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>handle</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtiofs</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </filesystem>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <tpm supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tpm-tis</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tpm-crb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendModel'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>emulator</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>external</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendVersion'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>2.0</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </tpm>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <redirdev supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='bus'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>usb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </redirdev>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <channel supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pty</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>unix</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </channel>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <crypto supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>qemu</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendModel'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>builtin</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </crypto>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <interface supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>default</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>passt</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </interface>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <panic supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>isa</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>hyperv</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </panic>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <console supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>null</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vc</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pty</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>dev</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>file</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pipe</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>stdio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>udp</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tcp</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>unix</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>qemu-vdagent</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>dbus</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </console>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </devices>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <features>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <gic supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <vmcoreinfo supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <genid supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <backingStoreInput supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <backup supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <async-teardown supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <s390-pv supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <ps2 supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <tdx supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <sev supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <sgx supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <hyperv supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='features'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>relaxed</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vapic</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>spinlocks</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vpindex</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>runtime</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>synic</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>stimer</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>reset</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vendor_id</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>frequencies</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>reenlightenment</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tlbflush</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>ipi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>avic</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>emsr_bitmap</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>xmm_input</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <defaults>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <spinlocks>4095</spinlocks>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <stimer_direct>on</stimer_direct>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </defaults>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </hyperv>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <launchSecurity supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </features>
Jan 21 11:25:25 np0005590810 nova_compute[250083]: </domainCapabilities>
Jan 21 11:25:25 np0005590810 nova_compute[250083]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.193 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.197 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 21 11:25:25 np0005590810 nova_compute[250083]: <domainCapabilities>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <domain>kvm</domain>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <arch>x86_64</arch>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <vcpu max='4096'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <iothreads supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <os supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <enum name='firmware'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>efi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <loader supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>rom</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pflash</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='readonly'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>yes</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>no</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='secure'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>yes</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>no</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </loader>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </os>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <cpu>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='host-passthrough' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='hostPassthroughMigratable'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>on</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>off</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='maximum' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='maximumMigratable'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>on</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>off</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='host-model' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <vendor>AMD</vendor>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='x2apic'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='hypervisor'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='stibp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='ssbd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='overflow-recov'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='succor'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='lbrv'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='tsc-scale'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='flushbyasid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='pause-filter'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='pfthreshold'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='disable' name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='custom' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='ClearwaterForest'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ddpd-u'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sha512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm3'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='ClearwaterForest-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ddpd-u'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sha512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm3'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cooperlake'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cooperlake-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cooperlake-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Dhyana-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Genoa'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='perfmon-v2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Turin'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='perfmon-v2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbpb'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Turin-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='perfmon-v2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbpb'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-128'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-256'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-128'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-256'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 systemd-coredump[250580]: Process 244224 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 41:#012#0  0x00007f1dd2b5c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v6'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v7'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='KnightsMill'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512er'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512pf'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='KnightsMill-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512er'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512pf'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G4-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tbm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G5-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tbm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='athlon'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='athlon-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='core2duo'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='core2duo-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='coreduo'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='coreduo-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='n270'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='n270-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='phenom'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='phenom-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </cpu>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <memoryBacking supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <enum name='sourceType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>file</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>anonymous</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>memfd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </memoryBacking>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <devices>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <disk supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='diskDevice'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>disk</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>cdrom</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>floppy</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>lun</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='bus'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>fdc</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>scsi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>usb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>sata</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-non-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </disk>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <graphics supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vnc</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>egl-headless</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>dbus</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </graphics>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <video supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='modelType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vga</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>cirrus</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>none</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>bochs</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>ramfb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </video>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <hostdev supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='mode'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>subsystem</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='startupPolicy'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>default</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>mandatory</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>requisite</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>optional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='subsysType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>usb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pci</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>scsi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='capsType'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='pciBackend'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </hostdev>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <rng supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-non-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendModel'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>random</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>egd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>builtin</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </rng>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <filesystem supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='driverType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>path</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>handle</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtiofs</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </filesystem>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <tpm supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tpm-tis</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tpm-crb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendModel'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>emulator</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>external</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendVersion'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>2.0</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </tpm>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <redirdev supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='bus'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>usb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </redirdev>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <channel supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pty</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>unix</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </channel>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <crypto supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>qemu</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendModel'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>builtin</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </crypto>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <interface supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>default</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>passt</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </interface>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <panic supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>isa</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>hyperv</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </panic>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <console supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>null</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vc</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pty</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>dev</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>file</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pipe</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>stdio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>udp</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tcp</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>unix</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>qemu-vdagent</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>dbus</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </console>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </devices>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <features>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <gic supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <vmcoreinfo supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <genid supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <backingStoreInput supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <backup supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <async-teardown supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <s390-pv supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <ps2 supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <tdx supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <sev supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <sgx supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <hyperv supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='features'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>relaxed</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vapic</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>spinlocks</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vpindex</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>runtime</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>synic</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>stimer</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>reset</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vendor_id</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>frequencies</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>reenlightenment</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tlbflush</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>ipi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>avic</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>emsr_bitmap</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>xmm_input</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <defaults>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <spinlocks>4095</spinlocks>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <stimer_direct>on</stimer_direct>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </defaults>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </hyperv>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <launchSecurity supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </features>
Jan 21 11:25:25 np0005590810 nova_compute[250083]: </domainCapabilities>
Jan 21 11:25:25 np0005590810 nova_compute[250083]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.285 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 21 11:25:25 np0005590810 nova_compute[250083]: <domainCapabilities>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <domain>kvm</domain>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <arch>x86_64</arch>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <vcpu max='240'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <iothreads supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <os supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <enum name='firmware'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <loader supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>rom</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pflash</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='readonly'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>yes</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>no</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='secure'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>no</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </loader>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </os>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <cpu>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='host-passthrough' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='hostPassthroughMigratable'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>on</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>off</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='maximum' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='maximumMigratable'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>on</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>off</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='host-model' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <vendor>AMD</vendor>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='x2apic'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='hypervisor'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='stibp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='ssbd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='overflow-recov'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='succor'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='lbrv'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='tsc-scale'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='flushbyasid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='pause-filter'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='pfthreshold'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <feature policy='disable' name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <mode name='custom' supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Broadwell-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='ClearwaterForest'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ddpd-u'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sha512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm3'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='ClearwaterForest-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ddpd-u'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sha512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm3'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sm4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cooperlake'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cooperlake-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Cooperlake-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Denverton-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Dhyana-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Genoa'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 systemd[1]: systemd-coredump@10-250579-0.service: Deactivated successfully.
Jan 21 11:25:25 np0005590810 systemd[1]: systemd-coredump@10-250579-0.service: Consumed 1.446s CPU time.
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='perfmon-v2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Milan-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Rome-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Turin'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='perfmon-v2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbpb'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-Turin-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amd-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='auto-ibrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='perfmon-v2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbpb'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='stibp-always-on'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='EPYC-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-128'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-256'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='GraniteRapids-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-128'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-256'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx10-512'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='prefetchiti'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Haswell-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v6'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Icelake-Server-v7'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='IvyBridge-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='KnightsMill'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512er'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512pf'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='KnightsMill-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512er'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512pf'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G4-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tbm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Opteron_G5-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fma4'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tbm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xop'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SapphireRapids-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='amx-tile'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-bf16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-fp16'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bitalg'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrc'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fzrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='la57'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='taa-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='SierraForest-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ifma'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cmpccxadd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fbsdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='fsrs'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ibrs-all'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='intel-psfd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='lam'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mcdt-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pbrsb-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='psdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='serialize'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vaes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Client-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='hle'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='rtm'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Skylake-Server-v5'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512bw'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512cd'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512dq'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512f'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='avx512vl'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='invpcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pcid'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='pku'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='mpx'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v2'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v3'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='core-capability'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='split-lock-detect'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='Snowridge-v4'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='cldemote'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='erms'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='gfni'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdir64b'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='movdiri'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='xsaves'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='athlon'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='athlon-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='core2duo'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='core2duo-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='coreduo'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='coreduo-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='n270'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='n270-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='ss'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='phenom'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <blockers model='phenom-v1'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnow'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <feature name='3dnowext'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </blockers>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </mode>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </cpu>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <memoryBacking supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <enum name='sourceType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>file</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>anonymous</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <value>memfd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </memoryBacking>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <devices>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <disk supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='diskDevice'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>disk</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>cdrom</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>floppy</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>lun</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='bus'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>ide</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>fdc</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>scsi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>usb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>sata</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-non-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </disk>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <graphics supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vnc</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>egl-headless</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>dbus</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </graphics>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <video supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='modelType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vga</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>cirrus</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>none</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>bochs</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>ramfb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </video>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <hostdev supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='mode'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>subsystem</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='startupPolicy'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>default</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>mandatory</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>requisite</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>optional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='subsysType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>usb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pci</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>scsi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='capsType'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='pciBackend'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </hostdev>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <rng supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtio-non-transitional</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendModel'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>random</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>egd</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>builtin</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </rng>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <filesystem supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='driverType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>path</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>handle</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>virtiofs</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </filesystem>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <tpm supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tpm-tis</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tpm-crb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendModel'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>emulator</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>external</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendVersion'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>2.0</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </tpm>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <redirdev supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='bus'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>usb</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </redirdev>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <channel supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pty</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>unix</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </channel>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <crypto supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>qemu</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendModel'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>builtin</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </crypto>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <interface supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='backendType'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>default</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>passt</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </interface>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <panic supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='model'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>isa</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>hyperv</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </panic>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <console supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='type'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>null</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vc</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pty</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>dev</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>file</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>pipe</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>stdio</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>udp</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tcp</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>unix</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>qemu-vdagent</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>dbus</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </console>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </devices>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  <features>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <gic supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <vmcoreinfo supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <genid supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <backingStoreInput supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <backup supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <async-teardown supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <s390-pv supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <ps2 supported='yes'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <tdx supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <sev supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <sgx supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <hyperv supported='yes'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <enum name='features'>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>relaxed</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vapic</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>spinlocks</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vpindex</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>runtime</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>synic</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>stimer</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>reset</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>vendor_id</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>frequencies</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>reenlightenment</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>tlbflush</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>ipi</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>avic</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>emsr_bitmap</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <value>xmm_input</value>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </enum>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      <defaults>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <spinlocks>4095</spinlocks>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <stimer_direct>on</stimer_direct>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:      </defaults>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    </hyperv>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:    <launchSecurity supported='no'/>
Jan 21 11:25:25 np0005590810 nova_compute[250083]:  </features>
Jan 21 11:25:25 np0005590810 nova_compute[250083]: </domainCapabilities>
Jan 21 11:25:25 np0005590810 nova_compute[250083]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.403 250087 DEBUG nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.403 250087 INFO nova.virt.libvirt.host [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Secure Boot support detected#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.406 250087 INFO nova.virt.libvirt.driver [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.406 250087 INFO nova.virt.libvirt.driver [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.415 250087 DEBUG nova.virt.libvirt.driver [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.440 250087 INFO nova.virt.node [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Determined node identity 2519faba-4002-49a2-b483-5098e748d2b5 from /var/lib/nova/compute_id#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.461 250087 WARNING nova.compute.manager [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Compute nodes ['2519faba-4002-49a2-b483-5098e748d2b5'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 21 11:25:25 np0005590810 podman[250982]: 2026-01-21 16:25:25.489081525 +0000 UTC m=+0.039946262 container died a1089552432e211b0702f1f3ddfbe1ea899d7b5503c7d73a6a84a6c76e76b0c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.494 250087 INFO nova.compute.manager [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 21 11:25:25 np0005590810 systemd[1]: var-lib-containers-storage-overlay-b2936eb6c242a2c94f4e272c14500fe13a462f02fb970f0a66d50f56623c53b6-merged.mount: Deactivated successfully.
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.522 250087 WARNING nova.compute.manager [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.522 250087 DEBUG oslo_concurrency.lockutils [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.522 250087 DEBUG oslo_concurrency.lockutils [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.523 250087 DEBUG oslo_concurrency.lockutils [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.523 250087 DEBUG nova.compute.resource_tracker [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.523 250087 DEBUG oslo_concurrency.processutils [None req-b9514043-fa54-49fb-a0f9-cee4611e3e26 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:25:25 np0005590810 podman[250982]: 2026-01-21 16:25:25.53656919 +0000 UTC m=+0.087433907 container remove a1089552432e211b0702f1f3ddfbe1ea899d7b5503c7d73a6a84a6c76e76b0c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 11:25:25 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Main process exited, code=exited, status=139/n/a
Jan 21 11:25:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:25.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:25 np0005590810 python3.9[250975]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 11:25:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:25:25] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:25:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:25:25] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:25:25 np0005590810 systemd[1]: Stopping nova_compute container...
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.699 250087 DEBUG oslo_concurrency.lockutils [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.701 250087 DEBUG oslo_concurrency.lockutils [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:25:25 np0005590810 nova_compute[250083]: 2026-01-21 16:25:25.701 250087 DEBUG oslo_concurrency.lockutils [None req-724532ca-bc7b-4aaa-88ff-f9248ef341b4 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:25:25 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Failed with result 'exit-code'.
Jan 21 11:25:25 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.711s CPU time.
Jan 21 11:25:26 np0005590810 systemd[1]: libpod-e2f881eeb2c071cff91a36d9d231b563696541da0189b69f6ccac512371c0d55.scope: Deactivated successfully.
Jan 21 11:25:26 np0005590810 virtqemud[250664]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 21 11:25:26 np0005590810 virtqemud[250664]: hostname: compute-0
Jan 21 11:25:26 np0005590810 virtqemud[250664]: End of file while reading data: Input/output error
Jan 21 11:25:26 np0005590810 systemd[1]: libpod-e2f881eeb2c071cff91a36d9d231b563696541da0189b69f6ccac512371c0d55.scope: Consumed 3.565s CPU time.
Jan 21 11:25:26 np0005590810 podman[251012]: 2026-01-21 16:25:26.179608337 +0000 UTC m=+0.530873272 container died e2f881eeb2c071cff91a36d9d231b563696541da0189b69f6ccac512371c0d55 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 21 11:25:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:26.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:26 np0005590810 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2f881eeb2c071cff91a36d9d231b563696541da0189b69f6ccac512371c0d55-userdata-shm.mount: Deactivated successfully.
Jan 21 11:25:26 np0005590810 systemd[1]: var-lib-containers-storage-overlay-dfb1aeed43a3c86426c68be38d884104b937bbc4ed572762a66097d586e90949-merged.mount: Deactivated successfully.
Jan 21 11:25:26 np0005590810 podman[251012]: 2026-01-21 16:25:26.602938972 +0000 UTC m=+0.954203907 container cleanup e2f881eeb2c071cff91a36d9d231b563696541da0189b69f6ccac512371c0d55 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.schema-version=1.0, config_id=edpm)
Jan 21 11:25:26 np0005590810 podman[251012]: nova_compute
Jan 21 11:25:26 np0005590810 podman[251075]: nova_compute
Jan 21 11:25:26 np0005590810 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 21 11:25:26 np0005590810 systemd[1]: Stopped nova_compute container.
Jan 21 11:25:26 np0005590810 systemd[1]: Starting nova_compute container...
Jan 21 11:25:26 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:25:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb1aeed43a3c86426c68be38d884104b937bbc4ed572762a66097d586e90949/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb1aeed43a3c86426c68be38d884104b937bbc4ed572762a66097d586e90949/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb1aeed43a3c86426c68be38d884104b937bbc4ed572762a66097d586e90949/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb1aeed43a3c86426c68be38d884104b937bbc4ed572762a66097d586e90949/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:26 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb1aeed43a3c86426c68be38d884104b937bbc4ed572762a66097d586e90949/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:26 np0005590810 podman[251088]: 2026-01-21 16:25:26.870153319 +0000 UTC m=+0.125719155 container init e2f881eeb2c071cff91a36d9d231b563696541da0189b69f6ccac512371c0d55 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 11:25:26 np0005590810 podman[251088]: 2026-01-21 16:25:26.878445967 +0000 UTC m=+0.134011773 container start e2f881eeb2c071cff91a36d9d231b563696541da0189b69f6ccac512371c0d55 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 21 11:25:26 np0005590810 podman[251088]: nova_compute
Jan 21 11:25:26 np0005590810 nova_compute[251104]: + sudo -E kolla_set_configs
Jan 21 11:25:26 np0005590810 systemd[1]: Started nova_compute container.
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Validating config file
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Copying service configuration files
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Deleting /etc/ceph
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Creating directory /etc/ceph
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /etc/ceph
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Writing out command to execute
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 11:25:26 np0005590810 nova_compute[251104]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 11:25:26 np0005590810 nova_compute[251104]: ++ cat /run_command
Jan 21 11:25:26 np0005590810 nova_compute[251104]: + CMD=nova-compute
Jan 21 11:25:26 np0005590810 nova_compute[251104]: + ARGS=
Jan 21 11:25:26 np0005590810 nova_compute[251104]: + sudo kolla_copy_cacerts
Jan 21 11:25:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:25:27 np0005590810 nova_compute[251104]: Running command: 'nova-compute'
Jan 21 11:25:27 np0005590810 nova_compute[251104]: + [[ ! -n '' ]]
Jan 21 11:25:27 np0005590810 nova_compute[251104]: + . kolla_extend_start
Jan 21 11:25:27 np0005590810 nova_compute[251104]: + echo 'Running command: '\''nova-compute'\'''
Jan 21 11:25:27 np0005590810 nova_compute[251104]: + umask 0022
Jan 21 11:25:27 np0005590810 nova_compute[251104]: + exec nova-compute
Jan 21 11:25:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:27.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:25:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:27.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:28.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:28.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:25:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.165 251108 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.166 251108 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.166 251108 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.166 251108 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 21 11:25:29 np0005590810 python3.9[251272]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.338 251108 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:25:29 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:25:29 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.356 251108 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.357 251108 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 21 11:25:29 np0005590810 systemd[1]: Started libpod-conmon-fbec749a8e20e79d1919323c96ad5981904043a92939afa066d6b1a57c8d2396.scope.
Jan 21 11:25:29 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:25:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f9002524ff2b4b9c441d3a8a06e81b7fea49188c5716af14763a4b73d51a70/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f9002524ff2b4b9c441d3a8a06e81b7fea49188c5716af14763a4b73d51a70/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:29 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f9002524ff2b4b9c441d3a8a06e81b7fea49188c5716af14763a4b73d51a70/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:29 np0005590810 podman[251299]: 2026-01-21 16:25:29.535858911 +0000 UTC m=+0.121015763 container init fbec749a8e20e79d1919323c96ad5981904043a92939afa066d6b1a57c8d2396 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:25:29 np0005590810 podman[251299]: 2026-01-21 16:25:29.545501573 +0000 UTC m=+0.130658405 container start fbec749a8e20e79d1919323c96ad5981904043a92939afa066d6b1a57c8d2396 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm)
Jan 21 11:25:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:25:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:29.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:25:29 np0005590810 python3.9[251272]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Applying nova statedir ownership
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 21 11:25:29 np0005590810 nova_compute_init[251320]: INFO:nova_statedir:Nova statedir ownership complete
Jan 21 11:25:29 np0005590810 systemd[1]: libpod-fbec749a8e20e79d1919323c96ad5981904043a92939afa066d6b1a57c8d2396.scope: Deactivated successfully.
Jan 21 11:25:29 np0005590810 podman[251333]: 2026-01-21 16:25:29.699287155 +0000 UTC m=+0.049379538 container died fbec749a8e20e79d1919323c96ad5981904043a92939afa066d6b1a57c8d2396 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 21 11:25:29 np0005590810 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fbec749a8e20e79d1919323c96ad5981904043a92939afa066d6b1a57c8d2396-userdata-shm.mount: Deactivated successfully.
Jan 21 11:25:29 np0005590810 systemd[1]: var-lib-containers-storage-overlay-08f9002524ff2b4b9c441d3a8a06e81b7fea49188c5716af14763a4b73d51a70-merged.mount: Deactivated successfully.
Jan 21 11:25:29 np0005590810 podman[251333]: 2026-01-21 16:25:29.745405015 +0000 UTC m=+0.095497388 container cleanup fbec749a8e20e79d1919323c96ad5981904043a92939afa066d6b1a57c8d2396 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251202)
Jan 21 11:25:29 np0005590810 systemd[1]: libpod-conmon-fbec749a8e20e79d1919323c96ad5981904043a92939afa066d6b1a57c8d2396.scope: Deactivated successfully.
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.786 251108 INFO nova.virt.driver [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.899 251108 INFO nova.compute.provider_config [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.909 251108 DEBUG oslo_concurrency.lockutils [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.909 251108 DEBUG oslo_concurrency.lockutils [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.910 251108 DEBUG oslo_concurrency.lockutils [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.910 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.910 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.910 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.910 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.911 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.911 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.911 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.911 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.911 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.911 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.912 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.912 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.912 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.912 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.912 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.913 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.913 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.913 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.913 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.913 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.913 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.914 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.914 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.914 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.914 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.914 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.914 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.915 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.915 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.915 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.915 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.915 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.915 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.916 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.916 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.916 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.916 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.916 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.917 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.917 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.917 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.917 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.918 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.918 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.918 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.919 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.919 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.919 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.919 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.920 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.920 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.920 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.920 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.920 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.921 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.921 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.921 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.921 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.921 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.922 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.922 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.922 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.922 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.922 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.922 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.923 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.923 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.923 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.923 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.923 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.924 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.924 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.924 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.924 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.924 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.924 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.925 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.925 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.925 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.925 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.925 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.925 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.925 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.926 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.926 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.926 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.926 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.926 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.926 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.927 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.927 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.927 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.927 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.927 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.927 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.927 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.928 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.928 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.928 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.928 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.928 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.928 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.929 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.929 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.929 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.929 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.929 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.929 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.930 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.930 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.930 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.930 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.930 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.931 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.931 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.931 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.931 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.931 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.931 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.931 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.932 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.932 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.932 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.932 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.932 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.932 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.932 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.933 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.933 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.933 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.933 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.933 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.933 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.934 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.934 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.934 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.934 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.934 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.934 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.934 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.935 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.935 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.935 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.935 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.935 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.935 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.936 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.936 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.936 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.936 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.936 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.937 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.937 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.937 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.937 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.937 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.937 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.937 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.938 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.938 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.938 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.938 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.938 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.938 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.939 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.939 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.939 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.939 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.939 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.939 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.939 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.940 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.940 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.940 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.940 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.940 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.940 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.941 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.941 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.941 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.941 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.941 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.941 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.941 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.942 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.942 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.942 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.942 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.942 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.943 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.943 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.943 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.943 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.943 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.943 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.943 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.944 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.944 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.944 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.944 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.944 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.944 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.945 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.945 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.945 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.945 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.945 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.945 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.945 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.946 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.946 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.946 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.946 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.946 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.946 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.947 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.947 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.947 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.947 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.947 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.947 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.947 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.948 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.948 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.948 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.948 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.948 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.948 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.948 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.949 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.949 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.949 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.949 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.949 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.949 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.950 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.950 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.950 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.950 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.950 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.950 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.950 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.951 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.951 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.951 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.951 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.951 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.951 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.952 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.952 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.952 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.952 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.952 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.952 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.952 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.952 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.953 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.953 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.953 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.953 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.953 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.953 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.954 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.954 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.954 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.954 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.954 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.954 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.955 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.955 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.955 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.955 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.955 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.955 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.956 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.956 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.956 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.956 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.956 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.956 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.957 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.957 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.957 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.957 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.957 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.957 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.957 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.958 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.958 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.958 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.958 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.958 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.958 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.959 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.959 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.959 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.959 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.959 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.959 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.960 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.960 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.960 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.960 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.960 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.960 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.960 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.961 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.961 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.961 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.961 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.961 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.961 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.962 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.962 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.962 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.962 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.962 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.963 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.963 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.963 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.963 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.963 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.963 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.964 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.964 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.964 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.964 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.965 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.965 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.965 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.965 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.965 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.965 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.965 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.966 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.966 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.966 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.966 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.966 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.966 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.966 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.967 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.967 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.967 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.967 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.967 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.967 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.968 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.968 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.968 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.968 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.968 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.968 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.969 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.969 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.969 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.969 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.970 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.970 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.970 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.970 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.970 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.970 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.970 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.971 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.971 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.971 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.971 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.971 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.971 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.972 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.972 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.972 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.972 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.972 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.972 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.972 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.973 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.973 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.973 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.973 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.973 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.973 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.973 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.974 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.974 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.974 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.974 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.974 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.974 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.975 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.975 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.975 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.975 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.975 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.975 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.976 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.976 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.976 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.976 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.976 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.976 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.976 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.977 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.977 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.977 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.977 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.977 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.977 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.977 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.978 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.978 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.978 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.978 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.978 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.978 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.978 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.979 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.979 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.979 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.979 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.979 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.979 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.979 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.980 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.980 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.980 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.980 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.980 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.980 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.980 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.981 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.981 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.981 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.981 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.981 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.981 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.981 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.982 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.982 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.982 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.982 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.982 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.982 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.982 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.983 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.983 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.983 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.983 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.983 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.983 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.983 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.984 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.984 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.984 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.984 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.984 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.984 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.984 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.985 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.985 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.985 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.985 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.985 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.985 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.985 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.986 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.986 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.986 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.986 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.986 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.986 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.987 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.987 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.987 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.987 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.987 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.987 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.987 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.988 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.988 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.988 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.988 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.988 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.988 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.988 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.989 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.989 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.989 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.989 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.989 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.989 251108 WARNING oslo_config.cfg [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 21 11:25:29 np0005590810 nova_compute[251104]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 21 11:25:29 np0005590810 nova_compute[251104]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 21 11:25:29 np0005590810 nova_compute[251104]: and ``live_migration_inbound_addr`` respectively.
Jan 21 11:25:29 np0005590810 nova_compute[251104]: ).  Its value may be silently ignored in the future.#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.990 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.990 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.990 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.990 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.990 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.991 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.991 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.991 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.991 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.991 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.991 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.991 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.992 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.992 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.992 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.992 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.992 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.992 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.992 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.rbd_secret_uuid        = d9745984-fea8-5195-8ec5-61f685b5c785 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.993 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.993 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.993 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.993 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.993 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.993 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.993 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.994 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.994 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.994 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.994 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.994 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.994 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.995 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.995 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.995 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.995 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.995 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.995 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.995 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.996 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.996 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.996 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.996 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.996 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.996 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.996 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.997 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.997 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.997 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.997 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.997 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.997 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.998 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.998 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.998 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.998 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.998 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.998 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.998 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.999 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.999 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.999 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.999 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.999 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:29 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.999 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:29.999 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.000 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.000 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.000 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.000 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.000 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.000 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.000 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.001 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.001 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.001 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.001 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.001 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.001 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.001 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.001 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.002 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.002 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.002 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.002 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.002 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.002 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.003 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.003 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.003 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.003 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.003 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.003 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.004 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.004 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.004 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.004 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.004 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.004 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.005 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.005 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.005 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.005 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.005 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.005 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.006 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.006 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.006 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.006 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.006 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.006 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.007 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.007 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.007 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.007 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.007 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.007 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.007 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.007 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.008 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.008 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.008 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.008 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.008 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.008 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.009 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.009 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.009 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.009 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.009 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.009 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.010 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.010 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.010 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.010 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.010 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.010 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.010 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.011 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.011 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.011 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.011 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.011 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.012 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.012 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.012 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.012 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.012 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.012 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.012 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.013 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.013 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.013 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.013 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.013 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.013 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.014 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.014 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.014 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.014 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.014 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.014 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.014 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.015 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.015 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.015 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.015 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.015 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.015 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.016 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.016 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.016 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.016 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.016 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.016 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.016 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.017 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.017 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.017 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.017 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.017 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.017 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.018 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.018 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.018 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.018 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.018 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.018 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.019 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.019 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.019 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.019 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.019 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.019 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.020 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.020 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.020 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.020 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.020 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.020 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.021 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.021 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.021 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.021 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.021 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.021 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.021 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.022 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.022 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.022 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.022 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.022 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.022 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.022 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.023 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.023 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.023 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.023 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.023 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.023 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.023 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.024 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.024 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.024 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.024 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.024 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.024 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.024 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.025 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.025 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.025 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.025 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.025 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.025 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.025 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.026 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.026 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.026 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.026 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.026 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.026 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.026 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.027 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.027 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.027 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.027 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.027 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.027 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.028 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.028 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.028 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.028 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.028 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.029 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.029 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.029 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.029 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.029 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.029 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.030 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.030 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.030 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.030 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.030 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.030 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.031 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.031 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.031 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.031 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.031 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.031 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.031 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.032 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.032 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.032 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.032 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.032 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.032 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.033 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.033 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.033 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.033 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.033 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.033 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.033 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.034 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.034 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.034 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.034 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.034 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.034 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.035 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.035 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.035 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.035 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.035 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.035 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.035 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.036 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.036 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.036 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.036 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.036 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.036 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.037 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.037 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.037 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.037 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.037 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.037 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.038 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.038 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.038 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.038 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.038 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.038 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.038 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.039 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.039 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.039 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.039 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.039 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.039 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.039 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.040 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.040 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.040 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.040 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.040 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.040 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.040 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.041 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.041 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.041 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.041 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.041 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.041 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.041 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.042 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.042 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.042 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.042 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.042 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.042 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.043 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.043 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.043 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.043 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.043 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.043 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.043 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.044 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.044 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.044 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.044 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.044 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.044 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.045 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.045 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.045 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.045 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.045 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.045 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.045 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.046 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.046 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.046 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.046 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.046 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.046 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.047 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.047 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.047 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.047 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.047 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.047 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.047 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.048 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.048 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.048 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.048 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.048 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.048 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.049 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.049 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.049 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.049 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.049 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.049 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.050 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.050 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.050 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.050 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.050 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.050 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.051 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.051 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.051 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.051 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.051 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.052 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.052 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.052 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.052 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.052 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.052 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.053 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.053 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.053 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.053 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.053 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.053 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.053 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.054 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.054 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.054 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.054 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.054 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.054 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.055 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.055 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.055 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.055 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.055 251108 DEBUG oslo_service.service [None req-1cc710ed-315f-414d-981a-5546c19c4f7a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.056 251108 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.073 251108 INFO nova.virt.node [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Determined node identity 2519faba-4002-49a2-b483-5098e748d2b5 from /var/lib/nova/compute_id
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.074 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.075 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.075 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.075 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.093 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fa2046ae220> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.097 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fa2046ae220> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.098 251108 INFO nova.virt.libvirt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Connection event '1' reason 'None'
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.104 251108 INFO nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Libvirt host capabilities <capabilities>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <host>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <uuid>ef0b02dd-ef52-452f-a99a-26608ae61ceb</uuid>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <cpu>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <arch>x86_64</arch>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model>EPYC-Rome-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <vendor>AMD</vendor>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <microcode version='16777317'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <signature family='23' model='49' stepping='0'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='x2apic'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='tsc-deadline'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='osxsave'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='hypervisor'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='tsc_adjust'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='spec-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='stibp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='arch-capabilities'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='cmp_legacy'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='topoext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='virt-ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='lbrv'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='tsc-scale'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='vmcb-clean'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='pause-filter'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='pfthreshold'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='svme-addr-chk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='rdctl-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='skip-l1dfl-vmentry'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='mds-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature name='pschange-mc-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <pages unit='KiB' size='4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <pages unit='KiB' size='2048'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <pages unit='KiB' size='1048576'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </cpu>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <power_management>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <suspend_mem/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </power_management>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <iommu support='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <migration_features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <live/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <uri_transports>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <uri_transport>tcp</uri_transport>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <uri_transport>rdma</uri_transport>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </uri_transports>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </migration_features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <topology>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <cells num='1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <cell id='0'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:          <memory unit='KiB'>7864316</memory>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:          <pages unit='KiB' size='4'>1966079</pages>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:          <pages unit='KiB' size='2048'>0</pages>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:          <distances>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:            <sibling id='0' value='10'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:          </distances>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:          <cpus num='8'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:          </cpus>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        </cell>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </cells>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </topology>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <cache>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </cache>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <secmodel>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model>selinux</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <doi>0</doi>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </secmodel>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <secmodel>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model>dac</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <doi>0</doi>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </secmodel>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </host>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <guest>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <os_type>hvm</os_type>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <arch name='i686'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <wordsize>32</wordsize>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <domain type='qemu'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <domain type='kvm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </arch>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <pae/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <nonpae/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <acpi default='on' toggle='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <apic default='on' toggle='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <cpuselection/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <deviceboot/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <disksnapshot default='on' toggle='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <externalSnapshot/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </guest>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <guest>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <os_type>hvm</os_type>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <arch name='x86_64'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <wordsize>64</wordsize>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <domain type='qemu'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <domain type='kvm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </arch>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <acpi default='on' toggle='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <apic default='on' toggle='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <cpuselection/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <deviceboot/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <disksnapshot default='on' toggle='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <externalSnapshot/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </guest>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 
Jan 21 11:25:30 np0005590810 nova_compute[251104]: </capabilities>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.113 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.118 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 21 11:25:30 np0005590810 nova_compute[251104]: <domainCapabilities>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <domain>kvm</domain>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <arch>i686</arch>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <vcpu max='240'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <iothreads supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <os supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <enum name='firmware'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <loader supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>rom</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pflash</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='readonly'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>yes</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>no</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='secure'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>no</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </loader>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </os>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <cpu>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='host-passthrough' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='hostPassthroughMigratable'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>on</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>off</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='maximum' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='maximumMigratable'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>on</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>off</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='host-model' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <vendor>AMD</vendor>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='x2apic'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='hypervisor'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='stibp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='overflow-recov'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='succor'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='lbrv'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='tsc-scale'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='flushbyasid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='pause-filter'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='pfthreshold'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='disable' name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='custom' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='ClearwaterForest'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ddpd-u'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sha512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm3'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='ClearwaterForest-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ddpd-u'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sha512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm3'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cooperlake'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cooperlake-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cooperlake-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Dhyana-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Genoa'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='perfmon-v2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Turin'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='perfmon-v2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbpb'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Turin-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='perfmon-v2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbpb'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-128'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-256'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-128'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-256'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v6'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v7'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='KnightsMill'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512er'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512pf'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='KnightsMill-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512er'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512pf'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G4-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tbm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G5-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tbm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='athlon'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='athlon-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='core2duo'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='core2duo-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='coreduo'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='coreduo-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='n270'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='n270-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='phenom'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='phenom-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </cpu>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <memoryBacking supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <enum name='sourceType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>file</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>anonymous</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>memfd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </memoryBacking>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <devices>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <disk supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='diskDevice'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>disk</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>cdrom</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>floppy</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>lun</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='bus'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>ide</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>fdc</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>scsi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>usb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>sata</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-non-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </disk>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <graphics supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vnc</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>egl-headless</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>dbus</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </graphics>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <video supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='modelType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vga</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>cirrus</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>none</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>bochs</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>ramfb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </video>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <hostdev supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='mode'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>subsystem</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='startupPolicy'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>default</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>mandatory</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>requisite</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>optional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='subsysType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>usb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pci</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>scsi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='capsType'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='pciBackend'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </hostdev>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <rng supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-non-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendModel'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>random</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>egd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>builtin</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </rng>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <filesystem supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='driverType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>path</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>handle</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtiofs</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </filesystem>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <tpm supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tpm-tis</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tpm-crb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendModel'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>emulator</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>external</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendVersion'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>2.0</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </tpm>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <redirdev supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='bus'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>usb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </redirdev>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <channel supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pty</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>unix</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </channel>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <crypto supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>qemu</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendModel'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>builtin</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </crypto>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <interface supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>default</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>passt</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </interface>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <panic supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>isa</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>hyperv</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </panic>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <console supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>null</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vc</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pty</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>dev</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>file</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pipe</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>stdio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>udp</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tcp</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>unix</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>qemu-vdagent</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>dbus</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </console>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </devices>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <gic supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <vmcoreinfo supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <genid supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <backingStoreInput supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <backup supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <async-teardown supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <s390-pv supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <ps2 supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <tdx supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <sev supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <sgx supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <hyperv supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='features'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>relaxed</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vapic</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>spinlocks</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vpindex</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>runtime</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>synic</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>stimer</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>reset</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vendor_id</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>frequencies</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>reenlightenment</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tlbflush</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>ipi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>avic</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>emsr_bitmap</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>xmm_input</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <defaults>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <spinlocks>4095</spinlocks>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <stimer_direct>on</stimer_direct>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </defaults>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </hyperv>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <launchSecurity supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: </domainCapabilities>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
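The domainCapabilities document logged above can be inspected programmatically. As a minimal sketch using only the standard library — against a trimmed, hypothetical sample that follows the same schema as the dump above — the custom-mode CPU models and the host features blocking the unusable ones can be extracted like this:

```python
import xml.etree.ElementTree as ET

# Trimmed sample in the domainCapabilities schema seen in the log above;
# the model names and features here are illustrative, not the host's real set.
CAPS_XML = """
<domainCapabilities>
  <cpu>
    <mode name='custom' supported='yes'>
      <model usable='yes' vendor='unknown'>qemu64</model>
      <model usable='no' vendor='Intel'>Broadwell</model>
      <blockers model='Broadwell'>
        <feature name='erms'/>
        <feature name='hle'/>
      </blockers>
    </mode>
  </cpu>
</domainCapabilities>
"""

def model_blockers(caps_xml):
    """Map each custom-mode CPU model to the features blocking it.

    Usable models map to an empty list; unusable ones list the
    <blockers> features libvirt reports for them.
    """
    root = ET.fromstring(caps_xml)
    mode = root.find(".//cpu/mode[@name='custom']")
    result = {}
    for model in mode.findall('model'):
        name = model.text
        blockers = mode.find(f"blockers[@model='{name}']")
        features = ([f.get('name') for f in blockers.findall('feature')]
                    if blockers is not None else [])
        result[name] = features
    return result

print(model_blockers(CAPS_XML))
# {'qemu64': [], 'Broadwell': ['erms', 'hle']}
```

In a real deployment the XML would come from libvirt's `getDomainCapabilities()` on the connection object, which is what nova's `_get_domain_capabilities` helper in `host.py` wraps before logging it as above.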
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.129 251108 DEBUG nova.virt.libvirt.volume.mount [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.134 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 21 11:25:30 np0005590810 nova_compute[251104]: <domainCapabilities>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <domain>kvm</domain>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <arch>i686</arch>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <vcpu max='4096'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <iothreads supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <os supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <enum name='firmware'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <loader supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>rom</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pflash</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='readonly'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>yes</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>no</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='secure'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>no</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </loader>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </os>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <cpu>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='host-passthrough' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='hostPassthroughMigratable'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>on</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>off</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='maximum' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='maximumMigratable'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>on</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>off</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='host-model' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <vendor>AMD</vendor>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='x2apic'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='hypervisor'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='stibp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='overflow-recov'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='succor'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='lbrv'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='tsc-scale'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='flushbyasid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='pause-filter'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='pfthreshold'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='disable' name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='custom' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='ClearwaterForest'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ddpd-u'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sha512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm3'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='ClearwaterForest-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ddpd-u'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sha512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm3'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cooperlake'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cooperlake-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cooperlake-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Dhyana-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Genoa'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='perfmon-v2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:30.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Turin'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='perfmon-v2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbpb'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Turin-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='perfmon-v2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbpb'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-128'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-256'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-128'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-256'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v6'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v7'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='KnightsMill'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512er'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512pf'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='KnightsMill-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512er'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512pf'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G4-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tbm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G5-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tbm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='athlon'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='athlon-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='core2duo'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='core2duo-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='coreduo'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='coreduo-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='n270'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='n270-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='phenom'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='phenom-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </cpu>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <memoryBacking supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <enum name='sourceType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>file</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>anonymous</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>memfd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </memoryBacking>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <devices>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <disk supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='diskDevice'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>disk</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>cdrom</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>floppy</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>lun</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='bus'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>fdc</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>scsi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>usb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>sata</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-non-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </disk>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <graphics supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vnc</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>egl-headless</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>dbus</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </graphics>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <video supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='modelType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vga</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>cirrus</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>none</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>bochs</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>ramfb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </video>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <hostdev supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='mode'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>subsystem</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='startupPolicy'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>default</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>mandatory</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>requisite</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>optional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='subsysType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>usb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pci</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>scsi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='capsType'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='pciBackend'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </hostdev>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <rng supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-non-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendModel'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>random</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>egd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>builtin</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </rng>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <filesystem supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='driverType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>path</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>handle</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtiofs</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </filesystem>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <tpm supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tpm-tis</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tpm-crb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendModel'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>emulator</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>external</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendVersion'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>2.0</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </tpm>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <redirdev supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='bus'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>usb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </redirdev>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <channel supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pty</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>unix</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </channel>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <crypto supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>qemu</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendModel'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>builtin</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </crypto>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <interface supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>default</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>passt</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </interface>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <panic supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>isa</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>hyperv</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </panic>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <console supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>null</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vc</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pty</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>dev</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>file</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pipe</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>stdio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>udp</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tcp</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>unix</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>qemu-vdagent</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>dbus</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </console>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </devices>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <gic supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <vmcoreinfo supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <genid supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <backingStoreInput supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <backup supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <async-teardown supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <s390-pv supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <ps2 supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <tdx supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <sev supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <sgx supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <hyperv supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='features'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>relaxed</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vapic</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>spinlocks</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vpindex</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>runtime</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>synic</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>stimer</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>reset</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vendor_id</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>frequencies</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>reenlightenment</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tlbflush</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>ipi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>avic</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>emsr_bitmap</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>xmm_input</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <defaults>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <spinlocks>4095</spinlocks>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <stimer_direct>on</stimer_direct>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </defaults>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </hyperv>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <launchSecurity supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: </domainCapabilities>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.186 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.194 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 21 11:25:30 np0005590810 nova_compute[251104]: <domainCapabilities>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <domain>kvm</domain>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <arch>x86_64</arch>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <vcpu max='240'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <iothreads supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <os supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <enum name='firmware'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <loader supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>rom</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pflash</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='readonly'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>yes</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>no</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='secure'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>no</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </loader>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </os>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <cpu>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='host-passthrough' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='hostPassthroughMigratable'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>on</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>off</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='maximum' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='maximumMigratable'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>on</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>off</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='host-model' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <vendor>AMD</vendor>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='x2apic'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='hypervisor'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='stibp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='overflow-recov'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='succor'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='lbrv'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='tsc-scale'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='flushbyasid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='pause-filter'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='pfthreshold'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='disable' name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='custom' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='ClearwaterForest'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ddpd-u'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sha512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm3'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='ClearwaterForest-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ddpd-u'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sha512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm3'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cooperlake'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cooperlake-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cooperlake-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Dhyana-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Genoa'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='perfmon-v2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Turin'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='perfmon-v2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbpb'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Turin-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='perfmon-v2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbpb'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-128'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-256'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-128'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-256'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v6'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v7'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='KnightsMill'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512er'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512pf'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='KnightsMill-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512er'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512pf'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G4-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tbm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G5-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tbm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='athlon'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='athlon-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='core2duo'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='core2duo-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='coreduo'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='coreduo-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='n270'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='n270-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='phenom'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='phenom-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </cpu>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <memoryBacking supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <enum name='sourceType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>file</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>anonymous</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>memfd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </memoryBacking>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <devices>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <disk supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='diskDevice'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>disk</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>cdrom</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>floppy</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>lun</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='bus'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>ide</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>fdc</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>scsi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>usb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>sata</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-non-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </disk>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <graphics supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vnc</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>egl-headless</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>dbus</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </graphics>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <video supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='modelType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vga</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>cirrus</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>none</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>bochs</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>ramfb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </video>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <hostdev supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='mode'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>subsystem</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='startupPolicy'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>default</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>mandatory</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>requisite</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>optional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='subsysType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>usb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pci</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>scsi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='capsType'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='pciBackend'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </hostdev>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <rng supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-non-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendModel'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>random</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>egd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>builtin</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </rng>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <filesystem supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='driverType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>path</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>handle</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtiofs</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </filesystem>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <tpm supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tpm-tis</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tpm-crb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendModel'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>emulator</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>external</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendVersion'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>2.0</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </tpm>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <redirdev supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='bus'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>usb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </redirdev>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <channel supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pty</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>unix</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </channel>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <crypto supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>qemu</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendModel'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>builtin</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </crypto>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <interface supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>default</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>passt</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </interface>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <panic supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>isa</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>hyperv</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </panic>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <console supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>null</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vc</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pty</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>dev</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>file</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pipe</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>stdio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>udp</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tcp</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>unix</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>qemu-vdagent</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>dbus</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </console>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </devices>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <gic supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <vmcoreinfo supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <genid supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <backingStoreInput supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <backup supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <async-teardown supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <s390-pv supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <ps2 supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <tdx supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <sev supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <sgx supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <hyperv supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='features'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>relaxed</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vapic</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>spinlocks</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vpindex</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>runtime</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>synic</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>stimer</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>reset</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vendor_id</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>frequencies</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>reenlightenment</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tlbflush</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>ipi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>avic</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>emsr_bitmap</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>xmm_input</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <defaults>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <spinlocks>4095</spinlocks>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <stimer_direct>on</stimer_direct>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </defaults>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </hyperv>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <launchSecurity supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: </domainCapabilities>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.286 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 21 11:25:30 np0005590810 nova_compute[251104]: <domainCapabilities>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <domain>kvm</domain>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <arch>x86_64</arch>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <vcpu max='4096'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <iothreads supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <os supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <enum name='firmware'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>efi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <loader supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>rom</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pflash</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='readonly'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>yes</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>no</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='secure'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>yes</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>no</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </loader>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </os>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <cpu>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='host-passthrough' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='hostPassthroughMigratable'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>on</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>off</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='maximum' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='maximumMigratable'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>on</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>off</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='host-model' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <vendor>AMD</vendor>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='x2apic'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='hypervisor'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='stibp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='overflow-recov'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='succor'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='lbrv'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='tsc-scale'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='flushbyasid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='pause-filter'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='pfthreshold'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <feature policy='disable' name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <mode name='custom' supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Broadwell-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='ClearwaterForest'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ddpd-u'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sha512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm3'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='ClearwaterForest-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ddpd-u'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sha512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm3'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sm4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cooperlake'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cooperlake-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Cooperlake-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Denverton-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Dhyana-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Genoa'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='perfmon-v2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Milan-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Rome-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Turin'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='perfmon-v2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbpb'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-Turin-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amd-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='auto-ibrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vp2intersect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fs-gs-base-ns'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibpb-brtype'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='no-nested-data-bp'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='null-sel-clr-base'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='perfmon-v2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbpb'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='srso-user-kernel-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='stibp-always-on'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='EPYC-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-128'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-256'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='GraniteRapids-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-128'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-256'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx10-512'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='prefetchiti'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Haswell-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v6'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Icelake-Server-v7'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='IvyBridge-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='KnightsMill'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512er'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512pf'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='KnightsMill-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4fmaps'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-4vnniw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512er'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512pf'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G4-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tbm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Opteron_G5-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fma4'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tbm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xop'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SapphireRapids-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='amx-tile'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-bf16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-fp16'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512-vpopcntdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bitalg'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vbmi2'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrc'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fzrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='la57'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='taa-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='tsx-ldtrk'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='SierraForest-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ifma'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-ne-convert'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx-vnni-int8'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bhi-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='bus-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cmpccxadd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fbsdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='fsrs'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ibrs-all'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='intel-psfd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ipred-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='lam'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mcdt-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pbrsb-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='psdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rrsba-ctrl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='sbdr-ssdp-no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='serialize'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vaes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='vpclmulqdq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Client-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='hle'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='rtm'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Skylake-Server-v5'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512bw'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512cd'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512dq'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512f'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='avx512vl'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='invpcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pcid'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='pku'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='mpx'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v2'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v3'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='core-capability'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='split-lock-detect'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='Snowridge-v4'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='cldemote'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='erms'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='gfni'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdir64b'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='movdiri'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='xsaves'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='athlon'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='athlon-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='core2duo'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='core2duo-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='coreduo'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='coreduo-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='n270'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='n270-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='ss'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='phenom'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <blockers model='phenom-v1'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnow'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <feature name='3dnowext'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </blockers>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </mode>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </cpu>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <memoryBacking supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <enum name='sourceType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>file</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>anonymous</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <value>memfd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </memoryBacking>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <devices>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <disk supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='diskDevice'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>disk</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>cdrom</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>floppy</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>lun</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='bus'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>fdc</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>scsi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>usb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>sata</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-non-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </disk>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <graphics supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vnc</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>egl-headless</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>dbus</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </graphics>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <video supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='modelType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vga</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>cirrus</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>none</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>bochs</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>ramfb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </video>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <hostdev supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='mode'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>subsystem</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='startupPolicy'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>default</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>mandatory</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>requisite</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>optional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='subsysType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>usb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pci</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>scsi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='capsType'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='pciBackend'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </hostdev>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <rng supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtio-non-transitional</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendModel'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>random</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>egd</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>builtin</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </rng>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <filesystem supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='driverType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>path</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>handle</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>virtiofs</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </filesystem>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <tpm supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tpm-tis</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tpm-crb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendModel'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>emulator</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>external</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendVersion'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>2.0</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </tpm>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <redirdev supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='bus'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>usb</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </redirdev>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <channel supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pty</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>unix</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </channel>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <crypto supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>qemu</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendModel'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>builtin</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </crypto>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <interface supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='backendType'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>default</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>passt</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </interface>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <panic supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='model'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>isa</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>hyperv</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </panic>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <console supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='type'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>null</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vc</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pty</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>dev</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>file</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>pipe</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>stdio</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>udp</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tcp</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>unix</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>qemu-vdagent</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>dbus</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </console>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </devices>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  <features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <gic supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <vmcoreinfo supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <genid supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <backingStoreInput supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <backup supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <async-teardown supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <s390-pv supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <ps2 supported='yes'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <tdx supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <sev supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <sgx supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <hyperv supported='yes'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <enum name='features'>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>relaxed</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vapic</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>spinlocks</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vpindex</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>runtime</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>synic</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>stimer</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>reset</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>vendor_id</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>frequencies</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>reenlightenment</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>tlbflush</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>ipi</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>avic</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>emsr_bitmap</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <value>xmm_input</value>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </enum>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      <defaults>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <spinlocks>4095</spinlocks>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <stimer_direct>on</stimer_direct>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:      </defaults>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    </hyperv>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:    <launchSecurity supported='no'/>
Jan 21 11:25:30 np0005590810 nova_compute[251104]:  </features>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: </domainCapabilities>
Jan 21 11:25:30 np0005590810 nova_compute[251104]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.391 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.392 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.392 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.399 251108 INFO nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Secure Boot support detected#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.402 251108 INFO nova.virt.libvirt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.403 251108 INFO nova.virt.libvirt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.415 251108 DEBUG nova.virt.libvirt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.457 251108 INFO nova.virt.node [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Determined node identity 2519faba-4002-49a2-b483-5098e748d2b5 from /var/lib/nova/compute_id#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.475 251108 WARNING nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Compute nodes ['2519faba-4002-49a2-b483-5098e748d2b5'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.509 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.531 251108 WARNING nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.531 251108 DEBUG oslo_concurrency.lockutils [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.531 251108 DEBUG oslo_concurrency.lockutils [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.532 251108 DEBUG oslo_concurrency.lockutils [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.532 251108 DEBUG nova.compute.resource_tracker [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.532 251108 DEBUG oslo_concurrency.processutils [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:25:30 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162530 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:25:30 np0005590810 systemd[1]: session-54.scope: Deactivated successfully.
Jan 21 11:25:30 np0005590810 systemd[1]: session-54.scope: Consumed 2min 10.494s CPU time.
Jan 21 11:25:30 np0005590810 systemd-logind[795]: Session 54 logged out. Waiting for processes to exit.
Jan 21 11:25:30 np0005590810 systemd-logind[795]: Removed session 54.
Jan 21 11:25:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:25:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3029943125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:25:30 np0005590810 nova_compute[251104]: 2026-01-21 16:25:30.987 251108 DEBUG oslo_concurrency.processutils [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:25:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 21 11:25:31 np0005590810 systemd[1]: Starting libvirt nodedev daemon...
Jan 21 11:25:31 np0005590810 systemd[1]: Started libvirt nodedev daemon.
Jan 21 11:25:31 np0005590810 nova_compute[251104]: 2026-01-21 16:25:31.319 251108 WARNING nova.virt.libvirt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:25:31 np0005590810 nova_compute[251104]: 2026-01-21 16:25:31.320 251108 DEBUG nova.compute.resource_tracker [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4937MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 11:25:31 np0005590810 nova_compute[251104]: 2026-01-21 16:25:31.321 251108 DEBUG oslo_concurrency.lockutils [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:25:31 np0005590810 nova_compute[251104]: 2026-01-21 16:25:31.321 251108 DEBUG oslo_concurrency.lockutils [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:25:31 np0005590810 nova_compute[251104]: 2026-01-21 16:25:31.356 251108 WARNING nova.compute.resource_tracker [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] No compute node record for compute-0.ctlplane.example.com:2519faba-4002-49a2-b483-5098e748d2b5: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 2519faba-4002-49a2-b483-5098e748d2b5 could not be found.#033[00m
Jan 21 11:25:31 np0005590810 nova_compute[251104]: 2026-01-21 16:25:31.386 251108 INFO nova.compute.resource_tracker [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 2519faba-4002-49a2-b483-5098e748d2b5#033[00m
Jan 21 11:25:31 np0005590810 nova_compute[251104]: 2026-01-21 16:25:31.495 251108 DEBUG nova.compute.resource_tracker [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 11:25:31 np0005590810 nova_compute[251104]: 2026-01-21 16:25:31.496 251108 DEBUG nova.compute.resource_tracker [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 11:25:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:31.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:32.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:32 np0005590810 nova_compute[251104]: 2026-01-21 16:25:32.282 251108 INFO nova.scheduler.client.report [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [req-83931cc6-655c-4e2d-8cf7-0ea8f0fb4d66] Created resource provider record via placement API for resource provider with UUID 2519faba-4002-49a2-b483-5098e748d2b5 and name compute-0.ctlplane.example.com.#033[00m
Jan 21 11:25:32 np0005590810 nova_compute[251104]: 2026-01-21 16:25:32.299 251108 DEBUG oslo_concurrency.processutils [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:25:32 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:25:32 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3232751038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:25:32 np0005590810 nova_compute[251104]: 2026-01-21 16:25:32.766 251108 DEBUG oslo_concurrency.processutils [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:25:32 np0005590810 nova_compute[251104]: 2026-01-21 16:25:32.773 251108 DEBUG nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 21 11:25:32 np0005590810 nova_compute[251104]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 21 11:25:32 np0005590810 nova_compute[251104]: 2026-01-21 16:25:32.773 251108 INFO nova.virt.libvirt.host [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] kernel doesn't support AMD SEV#033[00m
Jan 21 11:25:32 np0005590810 nova_compute[251104]: 2026-01-21 16:25:32.774 251108 DEBUG nova.compute.provider_tree [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Updating inventory in ProviderTree for provider 2519faba-4002-49a2-b483-5098e748d2b5 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 21 11:25:32 np0005590810 nova_compute[251104]: 2026-01-21 16:25:32.775 251108 DEBUG nova.virt.libvirt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 21 11:25:32 np0005590810 nova_compute[251104]: 2026-01-21 16:25:32.880 251108 DEBUG nova.scheduler.client.report [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Updated inventory for provider 2519faba-4002-49a2-b483-5098e748d2b5 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 21 11:25:32 np0005590810 nova_compute[251104]: 2026-01-21 16:25:32.880 251108 DEBUG nova.compute.provider_tree [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Updating resource provider 2519faba-4002-49a2-b483-5098e748d2b5 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 21 11:25:32 np0005590810 nova_compute[251104]: 2026-01-21 16:25:32.880 251108 DEBUG nova.compute.provider_tree [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Updating inventory in ProviderTree for provider 2519faba-4002-49a2-b483-5098e748d2b5 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 21 11:25:32 np0005590810 nova_compute[251104]: 2026-01-21 16:25:32.975 251108 DEBUG nova.compute.provider_tree [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Updating resource provider 2519faba-4002-49a2-b483-5098e748d2b5 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 21 11:25:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:25:33 np0005590810 nova_compute[251104]: 2026-01-21 16:25:33.012 251108 DEBUG nova.compute.resource_tracker [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 11:25:33 np0005590810 nova_compute[251104]: 2026-01-21 16:25:33.012 251108 DEBUG oslo_concurrency.lockutils [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:25:33 np0005590810 nova_compute[251104]: 2026-01-21 16:25:33.012 251108 DEBUG nova.service [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Jan 21 11:25:33 np0005590810 nova_compute[251104]: 2026-01-21 16:25:33.084 251108 DEBUG nova.service [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Jan 21 11:25:33 np0005590810 nova_compute[251104]: 2026-01-21 16:25:33.085 251108 DEBUG nova.servicegroup.drivers.db [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Jan 21 11:25:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:33.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:34.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:25:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:25:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:35.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:25:35] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:25:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:25:35] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 21 11:25:35 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Scheduled restart job, restart counter is at 11.
Jan 21 11:25:35 np0005590810 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:25:35 np0005590810 systemd[1]: ceph-d9745984-fea8-5195-8ec5-61f685b5c785@nfs.cephfs.2.0.compute-0.mbatwb.service: Consumed 1.711s CPU time.
Jan 21 11:25:35 np0005590810 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785...
Jan 21 11:25:36 np0005590810 podman[251528]: 2026-01-21 16:25:36.095189249 +0000 UTC m=+0.046440442 container create 3d1c34b1fcac4a5c393befd56152af5c64038180fec2030af1c0f3f28ca73a81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:25:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b7a9b7f2e52a5f64815afdec46fbb3389039a450d13a597a1ed2a2919808fc/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b7a9b7f2e52a5f64815afdec46fbb3389039a450d13a597a1ed2a2919808fc/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b7a9b7f2e52a5f64815afdec46fbb3389039a450d13a597a1ed2a2919808fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:36 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b7a9b7f2e52a5f64815afdec46fbb3389039a450d13a597a1ed2a2919808fc/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbatwb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:25:36 np0005590810 podman[251528]: 2026-01-21 16:25:36.159517608 +0000 UTC m=+0.110768821 container init 3d1c34b1fcac4a5c393befd56152af5c64038180fec2030af1c0f3f28ca73a81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:25:36 np0005590810 podman[251528]: 2026-01-21 16:25:36.164882721 +0000 UTC m=+0.116134054 container start 3d1c34b1fcac4a5c393befd56152af5c64038180fec2030af1c0f3f28ca73a81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 21 11:25:36 np0005590810 bash[251528]: 3d1c34b1fcac4a5c393befd56152af5c64038180fec2030af1c0f3f28ca73a81
Jan 21 11:25:36 np0005590810 podman[251528]: 2026-01-21 16:25:36.074756648 +0000 UTC m=+0.026007871 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:25:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:36 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 21 11:25:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:36 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 21 11:25:36 np0005590810 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbatwb for d9745984-fea8-5195-8ec5-61f685b5c785.
Jan 21 11:25:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:36.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:36 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 21 11:25:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:36 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 21 11:25:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:36 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 21 11:25:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:36 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 21 11:25:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:36 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 21 11:25:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:36 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 21 11:25:36 np0005590810 podman[251586]: 2026-01-21 16:25:36.709143906 +0000 UTC m=+0.076615158 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:25:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:25:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:37.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:25:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:37.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:25:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:37.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:25:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:38.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:25:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:38.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:25:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:25:39
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['backups', 'images', '.mgr', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'vms', 'volumes', 'cephfs.cephfs.data']
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:25:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:25:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:25:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:39.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:25:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:25:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:40.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:25:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Jan 21 11:25:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:41.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:42.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:42 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 21 11:25:42 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:42 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 21 11:25:42 np0005590810 podman[251638]: 2026-01-21 16:25:42.735105591 +0000 UTC m=+0.111664240 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 11:25:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:25:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:43.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:44.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 21 11:25:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:25:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:45.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:25:45] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Jan 21 11:25:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:25:45] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Jan 21 11:25:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:46.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:25:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:47.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:25:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:47.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:25:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:47.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:48.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:48 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49ec001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:48.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:25:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:25:49 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:49 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:49.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:50.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:25:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/162550 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 21 11:25:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:50 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:50 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:50 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49dc000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 21 11:25:51 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:51 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49ec001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:51.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:52.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:52 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:52 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:52 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:25:53 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:53 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49dc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:53.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:54.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:25:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:25:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:54 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49ec001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:54 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:54 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:25:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:25:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:55 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:55.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:25:55] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Jan 21 11:25:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:25:55] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Jan 21 11:25:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:25:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:56.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:25:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:56 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49dc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:56 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:56 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49ec001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 21 11:25:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:57.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:25:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:57 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:57.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:25:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:25:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:25:58.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:25:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:58 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:58 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49dc001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:58.891Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:25:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:25:58.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:25:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:25:59 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:25:59 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49ec001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:25:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:25:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:25:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:25:59.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:26:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:26:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:26:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:26:00.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:26:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:26:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:26:00 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:26:00 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:26:00 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:26:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 21 11:26:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:26:01 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49dc002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:26:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:26:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 11:26:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:26:01.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 11:26:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:26:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:26:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:26:02.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:26:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:26:02 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49ec001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:26:02 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-nfs-cephfs-2-0-compute-0-mbatwb[251544]: 21/01/2026 16:26:02 : epoch 6970fe00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 21 11:26:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 21 11:34:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:34:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:34:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:34:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:34:10 np0005590810 rsyslogd[1006]: imjournal: 7843 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 21 11:34:10 np0005590810 nova_compute[251104]: 2026-01-21 16:34:10.164 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:10.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 4.0 KiB/s wr, 1 op/s
Jan 21 11:34:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:34:10 np0005590810 nova_compute[251104]: 2026-01-21 16:34:10.966 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:11.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:34:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:12.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:34:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 21 11:34:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [WARNING] 020/163412 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 21 11:34:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz[95847]: [ALERT] 020/163412 (4) : backend 'backend' has no server available!
Jan 21 11:34:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:34:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:13.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:34:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:34:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:34:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:34:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:14.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:34:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:34:14 np0005590810 nova_compute[251104]: 2026-01-21 16:34:14.960 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:15 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:34:15 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:34:15 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:34:15 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:34:15 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:34:15 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:34:15 np0005590810 nova_compute[251104]: 2026-01-21 16:34:15.034 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:15 np0005590810 nova_compute[251104]: 2026-01-21 16:34:15.166 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:15 np0005590810 podman[261350]: 2026-01-21 16:34:15.351171908 +0000 UTC m=+0.043495305 container create 32bff6bf3686632c1ab521062b91189d15b53a4d9d115c8003f85334a919890e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_franklin, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:34:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:15.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:15 np0005590810 systemd[1]: Started libpod-conmon-32bff6bf3686632c1ab521062b91189d15b53a4d9d115c8003f85334a919890e.scope.
Jan 21 11:34:15 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:34:15 np0005590810 podman[261350]: 2026-01-21 16:34:15.332392615 +0000 UTC m=+0.024716032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:34:15 np0005590810 podman[261350]: 2026-01-21 16:34:15.438913151 +0000 UTC m=+0.131236578 container init 32bff6bf3686632c1ab521062b91189d15b53a4d9d115c8003f85334a919890e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:34:15 np0005590810 podman[261350]: 2026-01-21 16:34:15.448144983 +0000 UTC m=+0.140468380 container start 32bff6bf3686632c1ab521062b91189d15b53a4d9d115c8003f85334a919890e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:34:15 np0005590810 podman[261350]: 2026-01-21 16:34:15.452981276 +0000 UTC m=+0.145304703 container attach 32bff6bf3686632c1ab521062b91189d15b53a4d9d115c8003f85334a919890e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_franklin, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 21 11:34:15 np0005590810 elegant_franklin[261367]: 167 167
Jan 21 11:34:15 np0005590810 systemd[1]: libpod-32bff6bf3686632c1ab521062b91189d15b53a4d9d115c8003f85334a919890e.scope: Deactivated successfully.
Jan 21 11:34:15 np0005590810 conmon[261367]: conmon 32bff6bf3686632c1ab5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32bff6bf3686632c1ab521062b91189d15b53a4d9d115c8003f85334a919890e.scope/container/memory.events
Jan 21 11:34:15 np0005590810 podman[261350]: 2026-01-21 16:34:15.456902309 +0000 UTC m=+0.149225706 container died 32bff6bf3686632c1ab521062b91189d15b53a4d9d115c8003f85334a919890e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 21 11:34:15 np0005590810 systemd[1]: var-lib-containers-storage-overlay-1d18ff87bc609bad718b6db3fe2acd9086b7e8b8e4eadb9ac6ad264872cc4b05-merged.mount: Deactivated successfully.
Jan 21 11:34:15 np0005590810 podman[261350]: 2026-01-21 16:34:15.502388307 +0000 UTC m=+0.194711704 container remove 32bff6bf3686632c1ab521062b91189d15b53a4d9d115c8003f85334a919890e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_franklin, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:34:15 np0005590810 systemd[1]: libpod-conmon-32bff6bf3686632c1ab521062b91189d15b53a4d9d115c8003f85334a919890e.scope: Deactivated successfully.
Jan 21 11:34:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:34:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Jan 21 11:34:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:34:15] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Jan 21 11:34:15 np0005590810 podman[261392]: 2026-01-21 16:34:15.684941027 +0000 UTC m=+0.048610398 container create 245df4242079678552110247dd504a2d74183c5f64b327f0b7a8df7cabcd7cd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:34:15 np0005590810 systemd[1]: Started libpod-conmon-245df4242079678552110247dd504a2d74183c5f64b327f0b7a8df7cabcd7cd1.scope.
Jan 21 11:34:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:34:15 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:34:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9c44cd6803380dd70f502462174f6f75633df86f9b18650afbf6ef42b9f408/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9c44cd6803380dd70f502462174f6f75633df86f9b18650afbf6ef42b9f408/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9c44cd6803380dd70f502462174f6f75633df86f9b18650afbf6ef42b9f408/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9c44cd6803380dd70f502462174f6f75633df86f9b18650afbf6ef42b9f408/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9c44cd6803380dd70f502462174f6f75633df86f9b18650afbf6ef42b9f408/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:15 np0005590810 podman[261392]: 2026-01-21 16:34:15.665798422 +0000 UTC m=+0.029467823 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:34:15 np0005590810 podman[261392]: 2026-01-21 16:34:15.771185702 +0000 UTC m=+0.134855093 container init 245df4242079678552110247dd504a2d74183c5f64b327f0b7a8df7cabcd7cd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 21 11:34:15 np0005590810 podman[261392]: 2026-01-21 16:34:15.77807068 +0000 UTC m=+0.141740041 container start 245df4242079678552110247dd504a2d74183c5f64b327f0b7a8df7cabcd7cd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:34:15 np0005590810 podman[261392]: 2026-01-21 16:34:15.781413236 +0000 UTC m=+0.145082607 container attach 245df4242079678552110247dd504a2d74183c5f64b327f0b7a8df7cabcd7cd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gauss, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:34:15 np0005590810 nova_compute[251104]: 2026-01-21 16:34:15.969 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:16 np0005590810 fervent_gauss[261409]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:34:16 np0005590810 fervent_gauss[261409]: --> All data devices are unavailable
Jan 21 11:34:16 np0005590810 systemd[1]: libpod-245df4242079678552110247dd504a2d74183c5f64b327f0b7a8df7cabcd7cd1.scope: Deactivated successfully.
Jan 21 11:34:16 np0005590810 podman[261392]: 2026-01-21 16:34:16.156642214 +0000 UTC m=+0.520311615 container died 245df4242079678552110247dd504a2d74183c5f64b327f0b7a8df7cabcd7cd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gauss, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:34:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:34:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:16.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:34:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 21 11:34:16 np0005590810 systemd[1]: var-lib-containers-storage-overlay-3c9c44cd6803380dd70f502462174f6f75633df86f9b18650afbf6ef42b9f408-merged.mount: Deactivated successfully.
Jan 21 11:34:16 np0005590810 podman[261392]: 2026-01-21 16:34:16.205443446 +0000 UTC m=+0.569112817 container remove 245df4242079678552110247dd504a2d74183c5f64b327f0b7a8df7cabcd7cd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gauss, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:34:16 np0005590810 systemd[1]: libpod-conmon-245df4242079678552110247dd504a2d74183c5f64b327f0b7a8df7cabcd7cd1.scope: Deactivated successfully.
Jan 21 11:34:16 np0005590810 podman[261530]: 2026-01-21 16:34:16.75897272 +0000 UTC m=+0.025357913 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:34:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:34:17.152Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:34:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:34:17.152Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:34:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:34:17.153Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:34:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:17.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:17 np0005590810 podman[261530]: 2026-01-21 16:34:17.88164229 +0000 UTC m=+1.148027463 container create 062707c644b2847b20e329d6adc0b4805f2d8187036dfbf48d165e1a1f11d621 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_curie, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:34:17 np0005590810 systemd[1]: Started libpod-conmon-062707c644b2847b20e329d6adc0b4805f2d8187036dfbf48d165e1a1f11d621.scope.
Jan 21 11:34:17 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:34:17 np0005590810 podman[261530]: 2026-01-21 16:34:17.96104625 +0000 UTC m=+1.227431433 container init 062707c644b2847b20e329d6adc0b4805f2d8187036dfbf48d165e1a1f11d621 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_curie, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 11:34:17 np0005590810 podman[261530]: 2026-01-21 16:34:17.96867136 +0000 UTC m=+1.235056533 container start 062707c644b2847b20e329d6adc0b4805f2d8187036dfbf48d165e1a1f11d621 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_curie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 11:34:17 np0005590810 podman[261530]: 2026-01-21 16:34:17.97214662 +0000 UTC m=+1.238531793 container attach 062707c644b2847b20e329d6adc0b4805f2d8187036dfbf48d165e1a1f11d621 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_curie, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 21 11:34:17 np0005590810 intelligent_curie[261548]: 167 167
Jan 21 11:34:17 np0005590810 systemd[1]: libpod-062707c644b2847b20e329d6adc0b4805f2d8187036dfbf48d165e1a1f11d621.scope: Deactivated successfully.
Jan 21 11:34:17 np0005590810 podman[261530]: 2026-01-21 16:34:17.977768468 +0000 UTC m=+1.244153651 container died 062707c644b2847b20e329d6adc0b4805f2d8187036dfbf48d165e1a1f11d621 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Jan 21 11:34:18 np0005590810 systemd[1]: var-lib-containers-storage-overlay-90e7085a03d25a012cd6cdbc0e0dc9fe4f3ad611caa372af7e8296b40b7417ad-merged.mount: Deactivated successfully.
Jan 21 11:34:18 np0005590810 podman[261530]: 2026-01-21 16:34:18.025409454 +0000 UTC m=+1.291794637 container remove 062707c644b2847b20e329d6adc0b4805f2d8187036dfbf48d165e1a1f11d621 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_curie, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:34:18 np0005590810 systemd[1]: libpod-conmon-062707c644b2847b20e329d6adc0b4805f2d8187036dfbf48d165e1a1f11d621.scope: Deactivated successfully.
Jan 21 11:34:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:18.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 21 11:34:18 np0005590810 podman[261571]: 2026-01-21 16:34:18.206329531 +0000 UTC m=+0.045791157 container create f6380883c3a98a6c1635da7bb62102b75e8b3229c2cd0e75c5363b6bdd991853 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_hermann, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:34:18 np0005590810 systemd[1]: Started libpod-conmon-f6380883c3a98a6c1635da7bb62102b75e8b3229c2cd0e75c5363b6bdd991853.scope.
Jan 21 11:34:18 np0005590810 podman[261571]: 2026-01-21 16:34:18.18791681 +0000 UTC m=+0.027378456 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:34:18 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:34:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66511f80e9120ca299b6795a771e6909f4853f49a25458fddb5aa6812dc9361/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66511f80e9120ca299b6795a771e6909f4853f49a25458fddb5aa6812dc9361/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66511f80e9120ca299b6795a771e6909f4853f49a25458fddb5aa6812dc9361/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:18 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66511f80e9120ca299b6795a771e6909f4853f49a25458fddb5aa6812dc9361/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:18 np0005590810 podman[261571]: 2026-01-21 16:34:18.309566014 +0000 UTC m=+0.149027640 container init f6380883c3a98a6c1635da7bb62102b75e8b3229c2cd0e75c5363b6bdd991853 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_hermann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 11:34:18 np0005590810 podman[261571]: 2026-01-21 16:34:18.316198384 +0000 UTC m=+0.155660010 container start f6380883c3a98a6c1635da7bb62102b75e8b3229c2cd0e75c5363b6bdd991853 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 21 11:34:18 np0005590810 podman[261571]: 2026-01-21 16:34:18.319821308 +0000 UTC m=+0.159282964 container attach f6380883c3a98a6c1635da7bb62102b75e8b3229c2cd0e75c5363b6bdd991853 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]: {
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:    "0": [
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:        {
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            "devices": [
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "/dev/loop3"
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            ],
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            "lv_name": "ceph_lv0",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            "lv_size": "21470642176",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            "name": "ceph_lv0",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            "tags": {
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.cluster_name": "ceph",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.crush_device_class": "",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.encrypted": "0",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.osd_id": "0",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.type": "block",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.vdo": "0",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:                "ceph.with_tpm": "0"
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            },
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            "type": "block",
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:            "vg_name": "ceph_vg0"
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:        }
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]:    ]
Jan 21 11:34:18 np0005590810 stupefied_hermann[261587]: }
Jan 21 11:34:18 np0005590810 systemd[1]: libpod-f6380883c3a98a6c1635da7bb62102b75e8b3229c2cd0e75c5363b6bdd991853.scope: Deactivated successfully.
Jan 21 11:34:18 np0005590810 podman[261571]: 2026-01-21 16:34:18.622478744 +0000 UTC m=+0.461940370 container died f6380883c3a98a6c1635da7bb62102b75e8b3229c2cd0e75c5363b6bdd991853 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 11:34:18 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a66511f80e9120ca299b6795a771e6909f4853f49a25458fddb5aa6812dc9361-merged.mount: Deactivated successfully.
Jan 21 11:34:18 np0005590810 podman[261571]: 2026-01-21 16:34:18.670207822 +0000 UTC m=+0.509669448 container remove f6380883c3a98a6c1635da7bb62102b75e8b3229c2cd0e75c5363b6bdd991853 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:34:18 np0005590810 systemd[1]: libpod-conmon-f6380883c3a98a6c1635da7bb62102b75e8b3229c2cd0e75c5363b6bdd991853.scope: Deactivated successfully.
Jan 21 11:34:19 np0005590810 podman[261702]: 2026-01-21 16:34:19.331710197 +0000 UTC m=+0.044573649 container create 946ee795d2af510f27faef0555ef3e78f117569a7b6cbef2a6545d9974d4cbdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 21 11:34:19 np0005590810 systemd[1]: Started libpod-conmon-946ee795d2af510f27faef0555ef3e78f117569a7b6cbef2a6545d9974d4cbdd.scope.
Jan 21 11:34:19 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:34:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:19.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:19 np0005590810 podman[261702]: 2026-01-21 16:34:19.313822822 +0000 UTC m=+0.026686304 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:34:19 np0005590810 podman[261702]: 2026-01-21 16:34:19.415025161 +0000 UTC m=+0.127888613 container init 946ee795d2af510f27faef0555ef3e78f117569a7b6cbef2a6545d9974d4cbdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:34:19 np0005590810 podman[261702]: 2026-01-21 16:34:19.422073713 +0000 UTC m=+0.134937175 container start 946ee795d2af510f27faef0555ef3e78f117569a7b6cbef2a6545d9974d4cbdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_gould, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 11:34:19 np0005590810 podman[261702]: 2026-01-21 16:34:19.425315096 +0000 UTC m=+0.138178558 container attach 946ee795d2af510f27faef0555ef3e78f117569a7b6cbef2a6545d9974d4cbdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:34:19 np0005590810 agitated_gould[261718]: 167 167
Jan 21 11:34:19 np0005590810 systemd[1]: libpod-946ee795d2af510f27faef0555ef3e78f117569a7b6cbef2a6545d9974d4cbdd.scope: Deactivated successfully.
Jan 21 11:34:19 np0005590810 podman[261702]: 2026-01-21 16:34:19.428646611 +0000 UTC m=+0.141510063 container died 946ee795d2af510f27faef0555ef3e78f117569a7b6cbef2a6545d9974d4cbdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 11:34:19 np0005590810 systemd[1]: var-lib-containers-storage-overlay-dc62747b84cf1ceaf332f6526036f4004239dd8c8bfae20ed9329b241090daca-merged.mount: Deactivated successfully.
Jan 21 11:34:19 np0005590810 podman[261702]: 2026-01-21 16:34:19.464827315 +0000 UTC m=+0.177690757 container remove 946ee795d2af510f27faef0555ef3e78f117569a7b6cbef2a6545d9974d4cbdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:34:19 np0005590810 systemd[1]: libpod-conmon-946ee795d2af510f27faef0555ef3e78f117569a7b6cbef2a6545d9974d4cbdd.scope: Deactivated successfully.
Jan 21 11:34:19 np0005590810 podman[261744]: 2026-01-21 16:34:19.654054374 +0000 UTC m=+0.050204797 container create d2a4b1a08c879ea0b5e547b74ffc3223b10e9916b5460c226c4fbc52abd8b699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 21 11:34:19 np0005590810 systemd[1]: Started libpod-conmon-d2a4b1a08c879ea0b5e547b74ffc3223b10e9916b5460c226c4fbc52abd8b699.scope.
Jan 21 11:34:19 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:34:19 np0005590810 podman[261744]: 2026-01-21 16:34:19.631640446 +0000 UTC m=+0.027790889 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:34:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59fadf1dd75b891db01ad3fe8e5c76d99736d73d8a83e71429c19faab5d25ac9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59fadf1dd75b891db01ad3fe8e5c76d99736d73d8a83e71429c19faab5d25ac9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59fadf1dd75b891db01ad3fe8e5c76d99736d73d8a83e71429c19faab5d25ac9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:19 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59fadf1dd75b891db01ad3fe8e5c76d99736d73d8a83e71429c19faab5d25ac9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:34:19 np0005590810 podman[261744]: 2026-01-21 16:34:19.740646851 +0000 UTC m=+0.136797284 container init d2a4b1a08c879ea0b5e547b74ffc3223b10e9916b5460c226c4fbc52abd8b699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 21 11:34:19 np0005590810 podman[261744]: 2026-01-21 16:34:19.746587729 +0000 UTC m=+0.142738132 container start d2a4b1a08c879ea0b5e547b74ffc3223b10e9916b5460c226c4fbc52abd8b699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lederberg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:34:19 np0005590810 podman[261744]: 2026-01-21 16:34:19.751142093 +0000 UTC m=+0.147292526 container attach d2a4b1a08c879ea0b5e547b74ffc3223b10e9916b5460c226c4fbc52abd8b699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lederberg, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:34:20 np0005590810 nova_compute[251104]: 2026-01-21 16:34:20.167 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:20.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 21 11:34:20 np0005590810 lvm[261834]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:34:20 np0005590810 lvm[261834]: VG ceph_vg0 finished
Jan 21 11:34:20 np0005590810 wonderful_lederberg[261760]: {}
Jan 21 11:34:20 np0005590810 systemd[1]: libpod-d2a4b1a08c879ea0b5e547b74ffc3223b10e9916b5460c226c4fbc52abd8b699.scope: Deactivated successfully.
Jan 21 11:34:20 np0005590810 systemd[1]: libpod-d2a4b1a08c879ea0b5e547b74ffc3223b10e9916b5460c226c4fbc52abd8b699.scope: Consumed 1.224s CPU time.
Jan 21 11:34:20 np0005590810 podman[261744]: 2026-01-21 16:34:20.519752514 +0000 UTC m=+0.915902947 container died d2a4b1a08c879ea0b5e547b74ffc3223b10e9916b5460c226c4fbc52abd8b699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lederberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 21 11:34:20 np0005590810 systemd[1]: var-lib-containers-storage-overlay-59fadf1dd75b891db01ad3fe8e5c76d99736d73d8a83e71429c19faab5d25ac9-merged.mount: Deactivated successfully.
Jan 21 11:34:20 np0005590810 podman[261744]: 2026-01-21 16:34:20.569866578 +0000 UTC m=+0.966016991 container remove d2a4b1a08c879ea0b5e547b74ffc3223b10e9916b5460c226c4fbc52abd8b699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lederberg, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 11:34:20 np0005590810 systemd[1]: libpod-conmon-d2a4b1a08c879ea0b5e547b74ffc3223b10e9916b5460c226c4fbc52abd8b699.scope: Deactivated successfully.
Jan 21 11:34:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:34:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:34:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:34:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:34:20 np0005590810 nova_compute[251104]: 2026-01-21 16:34:20.939 251108 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769013245.9380453, 944e7379-4f28-488d-8e47-7dda98eefdc6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 21 11:34:20 np0005590810 nova_compute[251104]: 2026-01-21 16:34:20.939 251108 INFO nova.compute.manager [-] [instance: 944e7379-4f28-488d-8e47-7dda98eefdc6] VM Stopped (Lifecycle Event)#033[00m
Jan 21 11:34:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:34:20 np0005590810 nova_compute[251104]: 2026-01-21 16:34:20.972 251108 DEBUG nova.compute.manager [None req-a9e5b807-8bb6-4442-b0b3-cfb140ce88e3 - - - - - -] [instance: 944e7379-4f28-488d-8e47-7dda98eefdc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 21 11:34:20 np0005590810 nova_compute[251104]: 2026-01-21 16:34:20.973 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:21 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:34:21 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:34:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:21.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:34:22.023 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:34:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:34:22.025 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:34:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:34:22.025 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:34:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:22.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 21 11:34:22 np0005590810 podman[261877]: 2026-01-21 16:34:22.701144464 +0000 UTC m=+0.068129755 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 21 11:34:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:23.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Jan 21 11:34:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:24.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:34:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:34:25 np0005590810 nova_compute[251104]: 2026-01-21 16:34:25.169 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:34:25] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 21 11:34:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:34:25] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 21 11:34:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:34:25 np0005590810 nova_compute[251104]: 2026-01-21 16:34:25.976 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Jan 21 11:34:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:26.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:34:27.153Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:34:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:27.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:27 np0005590810 podman[261929]: 2026-01-21 16:34:27.730522519 +0000 UTC m=+0.111092012 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 21 11:34:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 170 B/s wr, 1 op/s
Jan 21 11:34:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:28.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:29.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:30 np0005590810 nova_compute[251104]: 2026-01-21 16:34:30.173 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 170 B/s wr, 1 op/s
Jan 21 11:34:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:30.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:34:30 np0005590810 nova_compute[251104]: 2026-01-21 16:34:30.979 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:31.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 170 B/s wr, 2 op/s
Jan 21 11:34:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:32.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:33 np0005590810 nova_compute[251104]: 2026-01-21 16:34:33.364 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:34:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:33.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Jan 21 11:34:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:34.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:35 np0005590810 nova_compute[251104]: 2026-01-21 16:34:35.174 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:35 np0005590810 nova_compute[251104]: 2026-01-21 16:34:35.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:34:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:35.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:34:35] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 21 11:34:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:34:35] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 21 11:34:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:34:35 np0005590810 nova_compute[251104]: 2026-01-21 16:34:35.982 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:34:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:36.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:34:37.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:34:37 np0005590810 nova_compute[251104]: 2026-01-21 16:34:37.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:34:37 np0005590810 nova_compute[251104]: 2026-01-21 16:34:37.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 11:34:37 np0005590810 nova_compute[251104]: 2026-01-21 16:34:37.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 11:34:37 np0005590810 nova_compute[251104]: 2026-01-21 16:34:37.380 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 11:34:37 np0005590810 nova_compute[251104]: 2026-01-21 16:34:37.380 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:34:37 np0005590810 nova_compute[251104]: 2026-01-21 16:34:37.381 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:34:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:37.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:34:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:38.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:38 np0005590810 nova_compute[251104]: 2026-01-21 16:34:38.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:34:38 np0005590810 nova_compute[251104]: 2026-01-21 16:34:38.393 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:34:38 np0005590810 nova_compute[251104]: 2026-01-21 16:34:38.394 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:34:38 np0005590810 nova_compute[251104]: 2026-01-21 16:34:38.394 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:34:38 np0005590810 nova_compute[251104]: 2026-01-21 16:34:38.395 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:34:38 np0005590810 nova_compute[251104]: 2026-01-21 16:34:38.395 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:34:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:34:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801902156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:34:38 np0005590810 nova_compute[251104]: 2026-01-21 16:34:38.835 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:34:39 np0005590810 nova_compute[251104]: 2026-01-21 16:34:39.023 251108 WARNING nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:34:39 np0005590810 nova_compute[251104]: 2026-01-21 16:34:39.025 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4622MB free_disk=59.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 11:34:39 np0005590810 nova_compute[251104]: 2026-01-21 16:34:39.025 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:34:39 np0005590810 nova_compute[251104]: 2026-01-21 16:34:39.025 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:34:39 np0005590810 nova_compute[251104]: 2026-01-21 16:34:39.113 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 11:34:39 np0005590810 nova_compute[251104]: 2026-01-21 16:34:39.114 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 11:34:39 np0005590810 nova_compute[251104]: 2026-01-21 16:34:39.131 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:34:39
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['backups', '.rgw.root', '.nfs', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.data']
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:34:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:34:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:34:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:39.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:34:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:34:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2805549050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:34:39 np0005590810 nova_compute[251104]: 2026-01-21 16:34:39.628 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:34:39 np0005590810 nova_compute[251104]: 2026-01-21 16:34:39.635 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:34:39 np0005590810 nova_compute[251104]: 2026-01-21 16:34:39.655 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:34:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:34:39 np0005590810 nova_compute[251104]: 2026-01-21 16:34:39.685 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 11:34:39 np0005590810 nova_compute[251104]: 2026-01-21 16:34:39.686 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:34:40 np0005590810 nova_compute[251104]: 2026-01-21 16:34:40.177 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:34:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:40.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:34:40 np0005590810 nova_compute[251104]: 2026-01-21 16:34:40.985 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:41.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:34:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 9295 writes, 35K keys, 9295 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9295 writes, 2129 syncs, 4.37 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1166 writes, 3974 keys, 1166 commit groups, 1.0 writes per commit group, ingest: 3.63 MB, 0.01 MB/s#012Interval WAL: 1166 writes, 474 syncs, 2.46 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 11:34:41 np0005590810 nova_compute[251104]: 2026-01-21 16:34:41.687 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:34:41 np0005590810 nova_compute[251104]: 2026-01-21 16:34:41.687 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:34:41 np0005590810 nova_compute[251104]: 2026-01-21 16:34:41.687 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:34:41 np0005590810 nova_compute[251104]: 2026-01-21 16:34:41.688 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 11:34:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 21 11:34:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:42.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:43.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 21 11:34:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:44.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:45 np0005590810 nova_compute[251104]: 2026-01-21 16:34:45.180 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:45.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:34:45] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Jan 21 11:34:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:34:45] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Jan 21 11:34:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:34:45 np0005590810 nova_compute[251104]: 2026-01-21 16:34:45.988 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 21 11:34:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:46.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:34:47.156Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:34:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:47.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 21 11:34:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:48.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:48 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:34:48.884 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:19:7b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:3b:98:31:96:2a'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:34:48 np0005590810 nova_compute[251104]: 2026-01-21 16:34:48.885 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:48 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:34:48.886 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 11:34:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:49.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:50 np0005590810 nova_compute[251104]: 2026-01-21 16:34:50.181 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 21 11:34:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:34:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:50.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:34:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:34:50 np0005590810 nova_compute[251104]: 2026-01-21 16:34:50.991 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:51.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:52 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 21 11:34:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:52.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:53.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:53 np0005590810 podman[262052]: 2026-01-21 16:34:53.674400539 +0000 UTC m=+0.050982283 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Jan 21 11:34:54 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 21 11:34:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:54.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:34:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:34:54 np0005590810 ovn_controller[152632]: 2026-01-21T16:34:54Z|00046|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 21 11:34:55 np0005590810 nova_compute[251104]: 2026-01-21 16:34:55.183 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:55.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:34:55] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Jan 21 11:34:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:34:55] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Jan 21 11:34:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:34:55 np0005590810 nova_compute[251104]: 2026-01-21 16:34:55.993 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:34:56 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 223 KiB/s wr, 76 op/s
Jan 21 11:34:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:56.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:56 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:34:56.888 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f6e8413f-2ba2-49cb-8bd6-36b8085ce01c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:34:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:34:57.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:34:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:57.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:34:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 211 KiB/s wr, 4 op/s
Jan 21 11:34:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:34:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:34:58.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:34:58 np0005590810 podman[262075]: 2026-01-21 16:34:58.716389414 +0000 UTC m=+0.091044108 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 11:34:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:34:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:34:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:34:59.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:00 np0005590810 nova_compute[251104]: 2026-01-21 16:35:00.185 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 211 KiB/s wr, 4 op/s
Jan 21 11:35:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:00.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:35:00 np0005590810 nova_compute[251104]: 2026-01-21 16:35:00.996 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:01.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 195 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 21 11:35:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:02.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:03.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 194 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 21 11:35:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:04.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:05 np0005590810 nova_compute[251104]: 2026-01-21 16:35:05.188 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:05.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:35:05] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Jan 21 11:35:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:35:05] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Jan 21 11:35:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:35:06 np0005590810 nova_compute[251104]: 2026-01-21 16:35:05.999 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 200 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 21 11:35:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:06.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:35:07.158Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:35:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:07.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 1.9 MiB/s wr, 56 op/s
Jan 21 11:35:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:08.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:35:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:35:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:35:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:35:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:09.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:35:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:35:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:35:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:35:10 np0005590810 nova_compute[251104]: 2026-01-21 16:35:10.189 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 1.9 MiB/s wr, 56 op/s
Jan 21 11:35:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:10.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:35:11 np0005590810 nova_compute[251104]: 2026-01-21 16:35:11.002 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:11.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 1.9 MiB/s wr, 57 op/s
Jan 21 11:35:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:12.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:13.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:13 np0005590810 nova_compute[251104]: 2026-01-21 16:35:13.949 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "916b9de7-c0f7-499a-b45d-2b546ae37790" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:13 np0005590810 nova_compute[251104]: 2026-01-21 16:35:13.950 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:13 np0005590810 nova_compute[251104]: 2026-01-21 16:35:13.975 251108 DEBUG nova.compute.manager [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.075 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.076 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.084 251108 DEBUG nova.virt.hardware [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.085 251108 INFO nova.compute.claims [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.199 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:35:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 17 KiB/s wr, 1 op/s
Jan 21 11:35:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:14.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:35:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476907388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.722 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.731 251108 DEBUG nova.compute.provider_tree [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.753 251108 DEBUG nova.scheduler.client.report [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.776 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.777 251108 DEBUG nova.compute.manager [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.824 251108 DEBUG nova.compute.manager [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.825 251108 DEBUG nova.network.neutron [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.844 251108 INFO nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.862 251108 DEBUG nova.compute.manager [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.965 251108 DEBUG nova.compute.manager [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.966 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 21 11:35:14 np0005590810 nova_compute[251104]: 2026-01-21 16:35:14.967 251108 INFO nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Creating image(s)#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.000 251108 DEBUG nova.storage.rbd_utils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 916b9de7-c0f7-499a-b45d-2b546ae37790_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.035 251108 DEBUG nova.storage.rbd_utils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 916b9de7-c0f7-499a-b45d-2b546ae37790_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.071 251108 DEBUG nova.storage.rbd_utils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 916b9de7-c0f7-499a-b45d-2b546ae37790_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.077 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2feac22a67fc835e7393e231263ebe1fb23c2b92 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.106 251108 DEBUG nova.policy [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '918cf3fb78394ce8b3ade91a1ad699fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d6214185b004f9c9798abfc29d1ae14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.154 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2feac22a67fc835e7393e231263ebe1fb23c2b92 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.155 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "2feac22a67fc835e7393e231263ebe1fb23c2b92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.156 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "2feac22a67fc835e7393e231263ebe1fb23c2b92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.156 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "2feac22a67fc835e7393e231263ebe1fb23c2b92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.192 251108 DEBUG nova.storage.rbd_utils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 916b9de7-c0f7-499a-b45d-2b546ae37790_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.198 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2feac22a67fc835e7393e231263ebe1fb23c2b92 916b9de7-c0f7-499a-b45d-2b546ae37790_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.222 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:15.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.543 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2feac22a67fc835e7393e231263ebe1fb23c2b92 916b9de7-c0f7-499a-b45d-2b546ae37790_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:35:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:35:15] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Jan 21 11:35:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:35:15] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.648 251108 DEBUG nova.storage.rbd_utils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] resizing rbd image 916b9de7-c0f7-499a-b45d-2b546ae37790_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 21 11:35:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.771 251108 DEBUG nova.objects.instance [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lazy-loading 'migration_context' on Instance uuid 916b9de7-c0f7-499a-b45d-2b546ae37790 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.786 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.786 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Ensure instance console log exists: /var/lib/nova/instances/916b9de7-c0f7-499a-b45d-2b546ae37790/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.787 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.787 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.787 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:35:15 np0005590810 nova_compute[251104]: 2026-01-21 16:35:15.933 251108 DEBUG nova.network.neutron [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Successfully created port: 8f623775-f44e-448c-8b71-2d3cece257a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 21 11:35:16 np0005590810 nova_compute[251104]: 2026-01-21 16:35:16.004 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 167 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 18 op/s
Jan 21 11:35:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:16.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:17 np0005590810 nova_compute[251104]: 2026-01-21 16:35:17.037 251108 DEBUG nova.network.neutron [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Successfully updated port: 8f623775-f44e-448c-8b71-2d3cece257a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 21 11:35:17 np0005590810 nova_compute[251104]: 2026-01-21 16:35:17.056 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "refresh_cache-916b9de7-c0f7-499a-b45d-2b546ae37790" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:35:17 np0005590810 nova_compute[251104]: 2026-01-21 16:35:17.056 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquired lock "refresh_cache-916b9de7-c0f7-499a-b45d-2b546ae37790" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:35:17 np0005590810 nova_compute[251104]: 2026-01-21 16:35:17.057 251108 DEBUG nova.network.neutron [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 21 11:35:17 np0005590810 nova_compute[251104]: 2026-01-21 16:35:17.160 251108 DEBUG nova.compute.manager [req-decac920-fe5f-47c4-9548-82f363d9d42e req-3c766e47-e0ef-49f3-865d-9d7e93b25364 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Received event network-changed-8f623775-f44e-448c-8b71-2d3cece257a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:35:17 np0005590810 nova_compute[251104]: 2026-01-21 16:35:17.160 251108 DEBUG nova.compute.manager [req-decac920-fe5f-47c4-9548-82f363d9d42e req-3c766e47-e0ef-49f3-865d-9d7e93b25364 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Refreshing instance network info cache due to event network-changed-8f623775-f44e-448c-8b71-2d3cece257a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 21 11:35:17 np0005590810 nova_compute[251104]: 2026-01-21 16:35:17.161 251108 DEBUG oslo_concurrency.lockutils [req-decac920-fe5f-47c4-9548-82f363d9d42e req-3c766e47-e0ef-49f3-865d-9d7e93b25364 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "refresh_cache-916b9de7-c0f7-499a-b45d-2b546ae37790" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:35:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:35:17.159Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:35:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:35:17.160Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:35:17 np0005590810 nova_compute[251104]: 2026-01-21 16:35:17.227 251108 DEBUG nova.network.neutron [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 21 11:35:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:17.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.160 251108 DEBUG nova.network.neutron [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Updating instance_info_cache with network_info: [{"id": "8f623775-f44e-448c-8b71-2d3cece257a2", "address": "fa:16:3e:86:28:5b", "network": {"id": "aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67", "bridge": "br-int", "label": "tempest-network-smoke--941394341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f623775-f4", "ovs_interfaceid": "8f623775-f44e-448c-8b71-2d3cece257a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.184 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Releasing lock "refresh_cache-916b9de7-c0f7-499a-b45d-2b546ae37790" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.185 251108 DEBUG nova.compute.manager [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Instance network_info: |[{"id": "8f623775-f44e-448c-8b71-2d3cece257a2", "address": "fa:16:3e:86:28:5b", "network": {"id": "aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67", "bridge": "br-int", "label": "tempest-network-smoke--941394341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f623775-f4", "ovs_interfaceid": "8f623775-f44e-448c-8b71-2d3cece257a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.185 251108 DEBUG oslo_concurrency.lockutils [req-decac920-fe5f-47c4-9548-82f363d9d42e req-3c766e47-e0ef-49f3-865d-9d7e93b25364 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquired lock "refresh_cache-916b9de7-c0f7-499a-b45d-2b546ae37790" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.186 251108 DEBUG nova.network.neutron [req-decac920-fe5f-47c4-9548-82f363d9d42e req-3c766e47-e0ef-49f3-865d-9d7e93b25364 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Refreshing network info cache for port 8f623775-f44e-448c-8b71-2d3cece257a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.189 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Start _get_guest_xml network_info=[{"id": "8f623775-f44e-448c-8b71-2d3cece257a2", "address": "fa:16:3e:86:28:5b", "network": {"id": "aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67", "bridge": "br-int", "label": "tempest-network-smoke--941394341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f623775-f4", "ovs_interfaceid": "8f623775-f44e-448c-8b71-2d3cece257a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-21T16:29:46Z,direct_url=<?>,disk_format='qcow2',id=437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ad455439fcc6470fa721af543ff96c56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-21T16:29:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'guest_format': None, 'size': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_format': None, 'image_id': '437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.196 251108 WARNING nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.200 251108 DEBUG nova.virt.libvirt.host [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.201 251108 DEBUG nova.virt.libvirt.host [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.208 251108 DEBUG nova.virt.libvirt.host [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.209 251108 DEBUG nova.virt.libvirt.host [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.210 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.210 251108 DEBUG nova.virt.hardware [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-21T16:29:45Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1e6b96db-db66-4485-bb89-2da0df7b45b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-21T16:29:46Z,direct_url=<?>,disk_format='qcow2',id=437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ad455439fcc6470fa721af543ff96c56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-21T16:29:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.210 251108 DEBUG nova.virt.hardware [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.211 251108 DEBUG nova.virt.hardware [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.211 251108 DEBUG nova.virt.hardware [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.211 251108 DEBUG nova.virt.hardware [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.211 251108 DEBUG nova.virt.hardware [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.212 251108 DEBUG nova.virt.hardware [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.212 251108 DEBUG nova.virt.hardware [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.212 251108 DEBUG nova.virt.hardware [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.212 251108 DEBUG nova.virt.hardware [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.213 251108 DEBUG nova.virt.hardware [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.216 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:35:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 167 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Jan 21 11:35:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:18.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 11:35:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/387489444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.708 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.738 251108 DEBUG nova.storage.rbd_utils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 916b9de7-c0f7-499a-b45d-2b546ae37790_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:35:18 np0005590810 nova_compute[251104]: 2026-01-21 16:35:18.743 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:35:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 11:35:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/723013927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.207 251108 DEBUG nova.network.neutron [req-decac920-fe5f-47c4-9548-82f363d9d42e req-3c766e47-e0ef-49f3-865d-9d7e93b25364 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Updated VIF entry in instance network info cache for port 8f623775-f44e-448c-8b71-2d3cece257a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.208 251108 DEBUG nova.network.neutron [req-decac920-fe5f-47c4-9548-82f363d9d42e req-3c766e47-e0ef-49f3-865d-9d7e93b25364 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Updating instance_info_cache with network_info: [{"id": "8f623775-f44e-448c-8b71-2d3cece257a2", "address": "fa:16:3e:86:28:5b", "network": {"id": "aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67", "bridge": "br-int", "label": "tempest-network-smoke--941394341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f623775-f4", "ovs_interfaceid": "8f623775-f44e-448c-8b71-2d3cece257a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.230 251108 DEBUG oslo_concurrency.lockutils [req-decac920-fe5f-47c4-9548-82f363d9d42e req-3c766e47-e0ef-49f3-865d-9d7e93b25364 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Releasing lock "refresh_cache-916b9de7-c0f7-499a-b45d-2b546ae37790" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.232 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.233 251108 DEBUG nova.virt.libvirt.vif [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-21T16:35:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1304058228',display_name='tempest-TestNetworkBasicOps-server-1304058228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1304058228',id=5,image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAODY0oPCHyc4+PjQ1725+nevOCWRrwWD4hHxtkHr9gLP39zHubzKPjYJSqNgg3dm+06/jcbQU4KzBZGc283KicJsmRyBqlgO57i4dMI+5UaV5ILKMUSPd1Pbh0vRFjh0g==',key_name='tempest-TestNetworkBasicOps-53645135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d6214185b004f9c9798abfc29d1ae14',ramdisk_id='',reservation_id='r-qi8jqxbp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1793517209',owner_user_name='tempest-TestNetworkBasicOps-1793517209-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-21T16:35:14Z,user_data=None,user_id='918cf3fb78394ce8b3ade91a1ad699fc',uuid=916b9de7-c0f7-499a-b45d-2b546ae37790,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f623775-f44e-448c-8b71-2d3cece257a2", "address": "fa:16:3e:86:28:5b", "network": {"id": "aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67", "bridge": "br-int", "label": "tempest-network-smoke--941394341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f623775-f4", "ovs_interfaceid": "8f623775-f44e-448c-8b71-2d3cece257a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.234 251108 DEBUG nova.network.os_vif_util [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converting VIF {"id": "8f623775-f44e-448c-8b71-2d3cece257a2", "address": "fa:16:3e:86:28:5b", "network": {"id": "aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67", "bridge": "br-int", "label": "tempest-network-smoke--941394341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f623775-f4", "ovs_interfaceid": "8f623775-f44e-448c-8b71-2d3cece257a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.235 251108 DEBUG nova.network.os_vif_util [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:28:5b,bridge_name='br-int',has_traffic_filtering=True,id=8f623775-f44e-448c-8b71-2d3cece257a2,network=Network(aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f623775-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.236 251108 DEBUG nova.objects.instance [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lazy-loading 'pci_devices' on Instance uuid 916b9de7-c0f7-499a-b45d-2b546ae37790 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.254 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] End _get_guest_xml xml=<domain type="kvm">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  <uuid>916b9de7-c0f7-499a-b45d-2b546ae37790</uuid>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  <name>instance-00000005</name>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  <memory>131072</memory>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  <vcpu>1</vcpu>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  <metadata>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <nova:name>tempest-TestNetworkBasicOps-server-1304058228</nova:name>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <nova:creationTime>2026-01-21 16:35:18</nova:creationTime>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <nova:flavor name="m1.nano">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <nova:memory>128</nova:memory>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <nova:disk>1</nova:disk>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <nova:swap>0</nova:swap>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <nova:ephemeral>0</nova:ephemeral>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <nova:vcpus>1</nova:vcpus>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      </nova:flavor>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <nova:owner>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <nova:user uuid="918cf3fb78394ce8b3ade91a1ad699fc">tempest-TestNetworkBasicOps-1793517209-project-member</nova:user>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <nova:project uuid="3d6214185b004f9c9798abfc29d1ae14">tempest-TestNetworkBasicOps-1793517209</nova:project>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      </nova:owner>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <nova:root type="image" uuid="437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <nova:ports>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <nova:port uuid="8f623775-f44e-448c-8b71-2d3cece257a2">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        </nova:port>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      </nova:ports>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    </nova:instance>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  </metadata>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  <sysinfo type="smbios">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <system>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <entry name="manufacturer">RDO</entry>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <entry name="product">OpenStack Compute</entry>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <entry name="serial">916b9de7-c0f7-499a-b45d-2b546ae37790</entry>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <entry name="uuid">916b9de7-c0f7-499a-b45d-2b546ae37790</entry>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <entry name="family">Virtual Machine</entry>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    </system>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  </sysinfo>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  <os>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <boot dev="hd"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <smbios mode="sysinfo"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  </os>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  <features>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <acpi/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <apic/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <vmcoreinfo/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  </features>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  <clock offset="utc">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <timer name="pit" tickpolicy="delay"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <timer name="hpet" present="no"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  </clock>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  <cpu mode="host-model" match="exact">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <topology sockets="1" cores="1" threads="1"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  </cpu>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  <devices>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <disk type="network" device="disk">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <driver type="raw" cache="none"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <source protocol="rbd" name="vms/916b9de7-c0f7-499a-b45d-2b546ae37790_disk">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <host name="192.168.122.100" port="6789"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <host name="192.168.122.102" port="6789"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <host name="192.168.122.101" port="6789"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      </source>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <auth username="openstack">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <secret type="ceph" uuid="d9745984-fea8-5195-8ec5-61f685b5c785"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      </auth>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <target dev="vda" bus="virtio"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    </disk>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <disk type="network" device="cdrom">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <driver type="raw" cache="none"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <source protocol="rbd" name="vms/916b9de7-c0f7-499a-b45d-2b546ae37790_disk.config">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <host name="192.168.122.100" port="6789"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <host name="192.168.122.102" port="6789"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <host name="192.168.122.101" port="6789"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      </source>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <auth username="openstack">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:        <secret type="ceph" uuid="d9745984-fea8-5195-8ec5-61f685b5c785"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      </auth>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <target dev="sda" bus="sata"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    </disk>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <interface type="ethernet">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <mac address="fa:16:3e:86:28:5b"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <model type="virtio"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <driver name="vhost" rx_queue_size="512"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <mtu size="1442"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <target dev="tap8f623775-f4"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    </interface>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <serial type="pty">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <log file="/var/lib/nova/instances/916b9de7-c0f7-499a-b45d-2b546ae37790/console.log" append="off"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    </serial>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <video>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <model type="virtio"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    </video>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <input type="tablet" bus="usb"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <rng model="virtio">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <backend model="random">/dev/urandom</backend>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    </rng>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <controller type="usb" index="0"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    <memballoon model="virtio">
Jan 21 11:35:19 np0005590810 nova_compute[251104]:      <stats period="10"/>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:    </memballoon>
Jan 21 11:35:19 np0005590810 nova_compute[251104]:  </devices>
Jan 21 11:35:19 np0005590810 nova_compute[251104]: </domain>
Jan 21 11:35:19 np0005590810 nova_compute[251104]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.255 251108 DEBUG nova.compute.manager [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Preparing to wait for external event network-vif-plugged-8f623775-f44e-448c-8b71-2d3cece257a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.256 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.256 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.257 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.257 251108 DEBUG nova.virt.libvirt.vif [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-21T16:35:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1304058228',display_name='tempest-TestNetworkBasicOps-server-1304058228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1304058228',id=5,image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAODY0oPCHyc4+PjQ1725+nevOCWRrwWD4hHxtkHr9gLP39zHubzKPjYJSqNgg3dm+06/jcbQU4KzBZGc283KicJsmRyBqlgO57i4dMI+5UaV5ILKMUSPd1Pbh0vRFjh0g==',key_name='tempest-TestNetworkBasicOps-53645135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d6214185b004f9c9798abfc29d1ae14',ramdisk_id='',reservation_id='r-qi8jqxbp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1793517209',owner_user_name='tempest-TestNetworkBasicOps-1793517209-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-21T16:35:14Z,user_data=None,user_id='918cf3fb78394ce8b3ade91a1ad699fc',uuid=916b9de7-c0f7-499a-b45d-2b546ae37790,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f623775-f44e-448c-8b71-2d3cece257a2", "address": "fa:16:3e:86:28:5b", "network": {"id": "aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67", "bridge": "br-int", "label": "tempest-network-smoke--941394341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f623775-f4", "ovs_interfaceid": "8f623775-f44e-448c-8b71-2d3cece257a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.258 251108 DEBUG nova.network.os_vif_util [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converting VIF {"id": "8f623775-f44e-448c-8b71-2d3cece257a2", "address": "fa:16:3e:86:28:5b", "network": {"id": "aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67", "bridge": "br-int", "label": "tempest-network-smoke--941394341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f623775-f4", "ovs_interfaceid": "8f623775-f44e-448c-8b71-2d3cece257a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.258 251108 DEBUG nova.network.os_vif_util [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:28:5b,bridge_name='br-int',has_traffic_filtering=True,id=8f623775-f44e-448c-8b71-2d3cece257a2,network=Network(aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f623775-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.259 251108 DEBUG os_vif [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:28:5b,bridge_name='br-int',has_traffic_filtering=True,id=8f623775-f44e-448c-8b71-2d3cece257a2,network=Network(aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f623775-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.260 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.260 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.261 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.265 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.266 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f623775-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.266 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f623775-f4, col_values=(('external_ids', {'iface-id': '8f623775-f44e-448c-8b71-2d3cece257a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:28:5b', 'vm-uuid': '916b9de7-c0f7-499a-b45d-2b546ae37790'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:35:19 np0005590810 NetworkManager[48894]: <info>  [1769013319.2691] manager: (tap8f623775-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.270 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.277 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.279 251108 INFO os_vif [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:28:5b,bridge_name='br-int',has_traffic_filtering=True,id=8f623775-f44e-448c-8b71-2d3cece257a2,network=Network(aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f623775-f4')#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.334 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.334 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.335 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] No VIF found with MAC fa:16:3e:86:28:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.335 251108 INFO nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Using config drive#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.363 251108 DEBUG nova.storage.rbd_utils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 916b9de7-c0f7-499a-b45d-2b546ae37790_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:35:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:19.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.656 251108 INFO nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Creating config drive at /var/lib/nova/instances/916b9de7-c0f7-499a-b45d-2b546ae37790/disk.config#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.662 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/916b9de7-c0f7-499a-b45d-2b546ae37790/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbptsxvba execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.788 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/916b9de7-c0f7-499a-b45d-2b546ae37790/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbptsxvba" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.828 251108 DEBUG nova.storage.rbd_utils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 916b9de7-c0f7-499a-b45d-2b546ae37790_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:35:19 np0005590810 nova_compute[251104]: 2026-01-21 16:35:19.832 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/916b9de7-c0f7-499a-b45d-2b546ae37790/disk.config 916b9de7-c0f7-499a-b45d-2b546ae37790_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.004 251108 DEBUG oslo_concurrency.processutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/916b9de7-c0f7-499a-b45d-2b546ae37790/disk.config 916b9de7-c0f7-499a-b45d-2b546ae37790_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.005 251108 INFO nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Deleting local config drive /var/lib/nova/instances/916b9de7-c0f7-499a-b45d-2b546ae37790/disk.config because it was imported into RBD.#033[00m
Jan 21 11:35:20 np0005590810 kernel: tap8f623775-f4: entered promiscuous mode
Jan 21 11:35:20 np0005590810 NetworkManager[48894]: <info>  [1769013320.0608] manager: (tap8f623775-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Jan 21 11:35:20 np0005590810 ovn_controller[152632]: 2026-01-21T16:35:20Z|00047|binding|INFO|Claiming lport 8f623775-f44e-448c-8b71-2d3cece257a2 for this chassis.
Jan 21 11:35:20 np0005590810 ovn_controller[152632]: 2026-01-21T16:35:20Z|00048|binding|INFO|8f623775-f44e-448c-8b71-2d3cece257a2: Claiming fa:16:3e:86:28:5b 10.100.0.4
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.061 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.066 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.071 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.078 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:28:5b 10.100.0.4'], port_security=['fa:16:3e:86:28:5b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '916b9de7-c0f7-499a-b45d-2b546ae37790', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d6214185b004f9c9798abfc29d1ae14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89656e6d-0ecf-4dec-802e-454a01b90ef3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a5c18f9-d28d-4536-a1f2-7252480fabee, chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], logical_port=8f623775-f44e-448c-8b71-2d3cece257a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.080 163593 INFO neutron.agent.ovn.metadata.agent [-] Port 8f623775-f44e-448c-8b71-2d3cece257a2 in datapath aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67 bound to our chassis#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.081 163593 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.093 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf57d80-322a-4630-9cc7-c7a5e68353cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.095 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa5f9bb7-f1 in ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 21 11:35:20 np0005590810 systemd-udevd[262473]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 11:35:20 np0005590810 systemd-machined[217254]: New machine qemu-2-instance-00000005.
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.098 260432 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa5f9bb7-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.099 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[d22d68bd-17d2-48b7-8587-2ec788c79743]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.101 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[7b24c3f1-43fa-49a6-8a7b-7a7b18f1404d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 NetworkManager[48894]: <info>  [1769013320.1151] device (tap8f623775-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.115 163844 DEBUG oslo.privsep.daemon [-] privsep: reply[7f984427-4666-4713-8eb0-e99cc6ad23d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 NetworkManager[48894]: <info>  [1769013320.1179] device (tap8f623775-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 21 11:35:20 np0005590810 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.144 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.143 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[92d9ff9f-4636-47ad-83cd-02a212fe68ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 ovn_controller[152632]: 2026-01-21T16:35:20Z|00049|binding|INFO|Setting lport 8f623775-f44e-448c-8b71-2d3cece257a2 ovn-installed in OVS
Jan 21 11:35:20 np0005590810 ovn_controller[152632]: 2026-01-21T16:35:20Z|00050|binding|INFO|Setting lport 8f623775-f44e-448c-8b71-2d3cece257a2 up in Southbound
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.151 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.174 260499 DEBUG oslo.privsep.daemon [-] privsep: reply[ed37f6f1-e686-4d00-9a93-f706d64f48d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 systemd-udevd[262476]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 11:35:20 np0005590810 NetworkManager[48894]: <info>  [1769013320.1833] manager: (tapaa5f9bb7-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.182 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4907d2-6109-4007-8804-963d0ace0243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.193 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 167 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.224 260499 DEBUG oslo.privsep.daemon [-] privsep: reply[6c14255d-8ac2-4954-80b4-dfab46b4de0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.227 260499 DEBUG oslo.privsep.daemon [-] privsep: reply[d77b55cc-7953-4d80-91c8-0fec63de165f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:35:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:20.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:35:20 np0005590810 NetworkManager[48894]: <info>  [1769013320.2500] device (tapaa5f9bb7-f0): carrier: link connected
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.255 260499 DEBUG oslo.privsep.daemon [-] privsep: reply[95ee8a6e-16c7-4dfa-be02-ffb7f92d18c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.271 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[19993d25-3c2b-4241-989c-cbb9b4144627]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa5f9bb7-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:54:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453692, 'reachable_time': 27468, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262506, 'error': None, 'target': 'ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.290 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[f89d217e-9b06-440a-801b-cbcbf44445e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:541d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453692, 'tstamp': 453692}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262507, 'error': None, 'target': 'ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.308 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a0ca40-c632-4a09-abba-4ad31b994762]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa5f9bb7-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:54:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453692, 'reachable_time': 27468, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262508, 'error': None, 'target': 'ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.341 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[296172b3-f179-4278-9eff-34edf568da11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.409 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2b2b09-d721-461d-86b0-46b76229c48c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.411 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa5f9bb7-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.411 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.412 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa5f9bb7-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:35:20 np0005590810 kernel: tapaa5f9bb7-f0: entered promiscuous mode
Jan 21 11:35:20 np0005590810 NetworkManager[48894]: <info>  [1769013320.4171] manager: (tapaa5f9bb7-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.417 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa5f9bb7-f0, col_values=(('external_ids', {'iface-id': '62e3e5c8-afd6-42fd-96f0-ce5d7eb2809d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:35:20 np0005590810 ovn_controller[152632]: 2026-01-21T16:35:20Z|00051|binding|INFO|Releasing lport 62e3e5c8-afd6-42fd-96f0-ce5d7eb2809d from this chassis (sb_readonly=0)
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.432 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.439 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.440 163593 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.442 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[367b701a-4240-42ea-8dc5-1d8b024c5618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.443 163593 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: global
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    log         /dev/log local0 debug
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    log-tag     haproxy-metadata-proxy-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    user        root
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    group       root
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    maxconn     1024
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    pidfile     /var/lib/neutron/external/pids/aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67.pid.haproxy
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    daemon
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: defaults
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    log global
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    mode http
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    option httplog
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    option dontlognull
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    option http-server-close
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    option forwardfor
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    retries                 3
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    timeout http-request    30s
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    timeout connect         30s
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    timeout client          32s
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    timeout server          32s
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    timeout http-keep-alive 30s
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: listen listener
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    bind 169.254.169.254:80
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    server metadata /var/lib/neutron/metadata_proxy
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]:    http-request add-header X-OVN-Network-ID aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 21 11:35:20 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:20.444 163593 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67', 'env', 'PROCESS_TAG=haproxy-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 21 11:35:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.753 251108 DEBUG nova.virt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Emitting event <LifecycleEvent: 1769013320.7525861, 916b9de7-c0f7-499a-b45d-2b546ae37790 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.753 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] VM Started (Lifecycle Event)#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.780 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.784 251108 DEBUG nova.virt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Emitting event <LifecycleEvent: 1769013320.7527602, 916b9de7-c0f7-499a-b45d-2b546ae37790 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.785 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] VM Paused (Lifecycle Event)#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.805 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.809 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 21 11:35:20 np0005590810 nova_compute[251104]: 2026-01-21 16:35:20.827 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 21 11:35:20 np0005590810 podman[262582]: 2026-01-21 16:35:20.834298191 +0000 UTC m=+0.058032709 container create dd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 21 11:35:20 np0005590810 systemd[1]: Started libpod-conmon-dd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5.scope.
Jan 21 11:35:20 np0005590810 podman[262582]: 2026-01-21 16:35:20.805554015 +0000 UTC m=+0.029288553 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 11:35:20 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:35:20 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08477096d1347d49652bfcb20aecdee0f1269fce6683b3735c6ad0818a21d89e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:20 np0005590810 podman[262582]: 2026-01-21 16:35:20.924926967 +0000 UTC m=+0.148661515 container init dd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 21 11:35:20 np0005590810 podman[262582]: 2026-01-21 16:35:20.930413 +0000 UTC m=+0.154147518 container start dd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 11:35:20 np0005590810 neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67[262598]: [NOTICE]   (262602) : New worker (262604) forked
Jan 21 11:35:20 np0005590810 neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67[262598]: [NOTICE]   (262602) : Loading success.
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.100 251108 DEBUG nova.compute.manager [req-30d5844f-52c1-4e68-819a-9a07a7b84f27 req-d3e646ab-27ff-4c28-b547-7e4c4495d312 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Received event network-vif-plugged-8f623775-f44e-448c-8b71-2d3cece257a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.101 251108 DEBUG oslo_concurrency.lockutils [req-30d5844f-52c1-4e68-819a-9a07a7b84f27 req-d3e646ab-27ff-4c28-b547-7e4c4495d312 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.101 251108 DEBUG oslo_concurrency.lockutils [req-30d5844f-52c1-4e68-819a-9a07a7b84f27 req-d3e646ab-27ff-4c28-b547-7e4c4495d312 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.101 251108 DEBUG oslo_concurrency.lockutils [req-30d5844f-52c1-4e68-819a-9a07a7b84f27 req-d3e646ab-27ff-4c28-b547-7e4c4495d312 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.102 251108 DEBUG nova.compute.manager [req-30d5844f-52c1-4e68-819a-9a07a7b84f27 req-d3e646ab-27ff-4c28-b547-7e4c4495d312 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Processing event network-vif-plugged-8f623775-f44e-448c-8b71-2d3cece257a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.102 251108 DEBUG nova.compute.manager [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.107 251108 DEBUG nova.virt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Emitting event <LifecycleEvent: 1769013321.1074307, 916b9de7-c0f7-499a-b45d-2b546ae37790 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.108 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] VM Resumed (Lifecycle Event)
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.110 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.114 251108 INFO nova.virt.libvirt.driver [-] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Instance spawned successfully.
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.114 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.152 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.160 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.166 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.166 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.167 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.168 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.168 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.169 251108 DEBUG nova.virt.libvirt.driver [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.181 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.223 251108 INFO nova.compute.manager [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Took 6.26 seconds to spawn the instance on the hypervisor.
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.223 251108 DEBUG nova.compute.manager [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.292 251108 INFO nova.compute.manager [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Took 7.26 seconds to build instance.
Jan 21 11:35:21 np0005590810 nova_compute[251104]: 2026-01-21 16:35:21.338 251108 DEBUG oslo_concurrency.lockutils [None req-6d1037ea-5d19-46b7-8c47-b2752425082b 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:35:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:21.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:35:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:35:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:35:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:35:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:22.024 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:35:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:22.025 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:35:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:22.027 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:35:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 21 11:35:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:22.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:22 np0005590810 podman[262785]: 2026-01-21 16:35:22.584897342 +0000 UTC m=+0.047350434 container create 2e1606773b247ac5f7b0982606cd00d31090c30a08e807474a385dc42bde4e2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_einstein, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:35:22 np0005590810 systemd[1]: Started libpod-conmon-2e1606773b247ac5f7b0982606cd00d31090c30a08e807474a385dc42bde4e2f.scope.
Jan 21 11:35:22 np0005590810 podman[262785]: 2026-01-21 16:35:22.566370378 +0000 UTC m=+0.028823500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:35:22 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:35:22 np0005590810 podman[262785]: 2026-01-21 16:35:22.707590359 +0000 UTC m=+0.170043471 container init 2e1606773b247ac5f7b0982606cd00d31090c30a08e807474a385dc42bde4e2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_einstein, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:35:22 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:22 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:22 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:22 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:22 np0005590810 podman[262785]: 2026-01-21 16:35:22.716047836 +0000 UTC m=+0.178500928 container start 2e1606773b247ac5f7b0982606cd00d31090c30a08e807474a385dc42bde4e2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_einstein, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:35:22 np0005590810 podman[262785]: 2026-01-21 16:35:22.723012115 +0000 UTC m=+0.185465207 container attach 2e1606773b247ac5f7b0982606cd00d31090c30a08e807474a385dc42bde4e2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_einstein, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:35:22 np0005590810 gifted_einstein[262802]: 167 167
Jan 21 11:35:22 np0005590810 systemd[1]: libpod-2e1606773b247ac5f7b0982606cd00d31090c30a08e807474a385dc42bde4e2f.scope: Deactivated successfully.
Jan 21 11:35:22 np0005590810 conmon[262802]: conmon 2e1606773b247ac5f7b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e1606773b247ac5f7b0982606cd00d31090c30a08e807474a385dc42bde4e2f.scope/container/memory.events
Jan 21 11:35:22 np0005590810 podman[262785]: 2026-01-21 16:35:22.726454353 +0000 UTC m=+0.188907435 container died 2e1606773b247ac5f7b0982606cd00d31090c30a08e807474a385dc42bde4e2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:35:22 np0005590810 systemd[1]: var-lib-containers-storage-overlay-98f55ce3338ba3d243b5b6a594bdcf1066fc9b8ac7f63215bb0cacf888fdcdc2-merged.mount: Deactivated successfully.
Jan 21 11:35:22 np0005590810 podman[262785]: 2026-01-21 16:35:22.782553471 +0000 UTC m=+0.245006563 container remove 2e1606773b247ac5f7b0982606cd00d31090c30a08e807474a385dc42bde4e2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Jan 21 11:35:22 np0005590810 systemd[1]: libpod-conmon-2e1606773b247ac5f7b0982606cd00d31090c30a08e807474a385dc42bde4e2f.scope: Deactivated successfully.
Jan 21 11:35:23 np0005590810 podman[262827]: 2026-01-21 16:35:22.954780209 +0000 UTC m=+0.027635012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:35:23 np0005590810 podman[262827]: 2026-01-21 16:35:23.093955636 +0000 UTC m=+0.166810409 container create 1e522137018b33d4649c26bfa987e556c09c8d1094caa03b27763567c2f8566f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:35:23 np0005590810 nova_compute[251104]: 2026-01-21 16:35:23.212 251108 DEBUG nova.compute.manager [req-be4b8400-72d8-40dd-8d4e-98417c7902e2 req-be595a72-13a2-480c-8bfb-36497855a8de 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Received event network-vif-plugged-8f623775-f44e-448c-8b71-2d3cece257a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 21 11:35:23 np0005590810 nova_compute[251104]: 2026-01-21 16:35:23.212 251108 DEBUG oslo_concurrency.lockutils [req-be4b8400-72d8-40dd-8d4e-98417c7902e2 req-be595a72-13a2-480c-8bfb-36497855a8de 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:35:23 np0005590810 nova_compute[251104]: 2026-01-21 16:35:23.212 251108 DEBUG oslo_concurrency.lockutils [req-be4b8400-72d8-40dd-8d4e-98417c7902e2 req-be595a72-13a2-480c-8bfb-36497855a8de 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:35:23 np0005590810 nova_compute[251104]: 2026-01-21 16:35:23.213 251108 DEBUG oslo_concurrency.lockutils [req-be4b8400-72d8-40dd-8d4e-98417c7902e2 req-be595a72-13a2-480c-8bfb-36497855a8de 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:35:23 np0005590810 nova_compute[251104]: 2026-01-21 16:35:23.213 251108 DEBUG nova.compute.manager [req-be4b8400-72d8-40dd-8d4e-98417c7902e2 req-be595a72-13a2-480c-8bfb-36497855a8de 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] No waiting events found dispatching network-vif-plugged-8f623775-f44e-448c-8b71-2d3cece257a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 21 11:35:23 np0005590810 nova_compute[251104]: 2026-01-21 16:35:23.213 251108 WARNING nova.compute.manager [req-be4b8400-72d8-40dd-8d4e-98417c7902e2 req-be595a72-13a2-480c-8bfb-36497855a8de 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Received unexpected event network-vif-plugged-8f623775-f44e-448c-8b71-2d3cece257a2 for instance with vm_state active and task_state None.
Jan 21 11:35:23 np0005590810 systemd[1]: Started libpod-conmon-1e522137018b33d4649c26bfa987e556c09c8d1094caa03b27763567c2f8566f.scope.
Jan 21 11:35:23 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:35:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249ffa25b63e672a1b63956f17f7d3a20697089992af917f3ff470e97e7dd77b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249ffa25b63e672a1b63956f17f7d3a20697089992af917f3ff470e97e7dd77b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249ffa25b63e672a1b63956f17f7d3a20697089992af917f3ff470e97e7dd77b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249ffa25b63e672a1b63956f17f7d3a20697089992af917f3ff470e97e7dd77b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:23 np0005590810 podman[262827]: 2026-01-21 16:35:23.3098494 +0000 UTC m=+0.382704173 container init 1e522137018b33d4649c26bfa987e556c09c8d1094caa03b27763567c2f8566f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_yonath, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 21 11:35:23 np0005590810 podman[262827]: 2026-01-21 16:35:23.31747784 +0000 UTC m=+0.390332613 container start 1e522137018b33d4649c26bfa987e556c09c8d1094caa03b27763567c2f8566f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_yonath, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:35:23 np0005590810 podman[262827]: 2026-01-21 16:35:23.32096762 +0000 UTC m=+0.393822393 container attach 1e522137018b33d4649c26bfa987e556c09c8d1094caa03b27763567c2f8566f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_yonath, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 21 11:35:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:23.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:24 np0005590810 musing_yonath[262869]: [
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:    {
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:        "available": false,
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:        "being_replaced": false,
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:        "ceph_device_lvm": false,
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:        "lsm_data": {},
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:        "lvs": [],
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:        "path": "/dev/sr0",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:        "rejected_reasons": [
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "Has a FileSystem",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "Insufficient space (<5GB)"
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:        ],
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:        "sys_api": {
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "actuators": null,
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "device_nodes": [
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:                "sr0"
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            ],
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "devname": "sr0",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "human_readable_size": "482.00 KB",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "id_bus": "ata",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "model": "QEMU DVD-ROM",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "nr_requests": "2",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "parent": "/dev/sr0",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "partitions": {},
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "path": "/dev/sr0",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "removable": "1",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "rev": "2.5+",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "ro": "0",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "rotational": "1",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "sas_address": "",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "sas_device_handle": "",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "scheduler_mode": "mq-deadline",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "sectors": 0,
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "sectorsize": "2048",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "size": 493568.0,
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "support_discard": "2048",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "type": "disk",
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:            "vendor": "QEMU"
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:        }
Jan 21 11:35:24 np0005590810 musing_yonath[262869]:    }
Jan 21 11:35:24 np0005590810 musing_yonath[262869]: ]
Jan 21 11:35:24 np0005590810 systemd[1]: libpod-1e522137018b33d4649c26bfa987e556c09c8d1094caa03b27763567c2f8566f.scope: Deactivated successfully.
Jan 21 11:35:24 np0005590810 podman[262827]: 2026-01-21 16:35:24.095738317 +0000 UTC m=+1.168593090 container died 1e522137018b33d4649c26bfa987e556c09c8d1094caa03b27763567c2f8566f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_yonath, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 21 11:35:24 np0005590810 systemd[1]: var-lib-containers-storage-overlay-249ffa25b63e672a1b63956f17f7d3a20697089992af917f3ff470e97e7dd77b-merged.mount: Deactivated successfully.
Jan 21 11:35:24 np0005590810 podman[262827]: 2026-01-21 16:35:24.185471385 +0000 UTC m=+1.258326158 container remove 1e522137018b33d4649c26bfa987e556c09c8d1094caa03b27763567c2f8566f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:35:24 np0005590810 systemd[1]: libpod-conmon-1e522137018b33d4649c26bfa987e556c09c8d1094caa03b27763567c2f8566f.scope: Deactivated successfully.
Jan 21 11:35:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 21 11:35:24 np0005590810 podman[264152]: 2026-01-21 16:35:24.226635182 +0000 UTC m=+0.102570003 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:35:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:24.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:35:24 np0005590810 nova_compute[251104]: 2026-01-21 16:35:24.269 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:24 np0005590810 nova_compute[251104]: 2026-01-21 16:35:24.464 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:24 np0005590810 ovn_controller[152632]: 2026-01-21T16:35:24Z|00052|binding|INFO|Releasing lport 62e3e5c8-afd6-42fd-96f0-ce5d7eb2809d from this chassis (sb_readonly=0)
Jan 21 11:35:24 np0005590810 NetworkManager[48894]: <info>  [1769013324.4746] manager: (patch-provnet-b53c687f-ce80-4374-bb32-b17e6ca8f621-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Jan 21 11:35:24 np0005590810 NetworkManager[48894]: <info>  [1769013324.4754] manager: (patch-br-int-to-provnet-b53c687f-ce80-4374-bb32-b17e6ca8f621): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 21 11:35:24 np0005590810 ovn_controller[152632]: 2026-01-21T16:35:24Z|00053|binding|INFO|Releasing lport 62e3e5c8-afd6-42fd-96f0-ce5d7eb2809d from this chassis (sb_readonly=0)
Jan 21 11:35:24 np0005590810 nova_compute[251104]: 2026-01-21 16:35:24.506 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:35:24 np0005590810 nova_compute[251104]: 2026-01-21 16:35:24.511 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:35:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:35:25 np0005590810 podman[264276]: 2026-01-21 16:35:25.15562776 +0000 UTC m=+0.047306522 container create cad0e95e99208a3e596db3f5bd0c50475e7f5646f852a34f4be8bbddd8bdcb9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:35:25 np0005590810 systemd[1]: Started libpod-conmon-cad0e95e99208a3e596db3f5bd0c50475e7f5646f852a34f4be8bbddd8bdcb9e.scope.
Jan 21 11:35:25 np0005590810 nova_compute[251104]: 2026-01-21 16:35:25.196 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:25 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:35:25 np0005590810 podman[264276]: 2026-01-21 16:35:25.134652519 +0000 UTC m=+0.026331311 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:35:25 np0005590810 podman[264276]: 2026-01-21 16:35:25.24381901 +0000 UTC m=+0.135497792 container init cad0e95e99208a3e596db3f5bd0c50475e7f5646f852a34f4be8bbddd8bdcb9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_williamson, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:35:25 np0005590810 podman[264276]: 2026-01-21 16:35:25.252790643 +0000 UTC m=+0.144469405 container start cad0e95e99208a3e596db3f5bd0c50475e7f5646f852a34f4be8bbddd8bdcb9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_williamson, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:35:25 np0005590810 heuristic_williamson[264294]: 167 167
Jan 21 11:35:25 np0005590810 nova_compute[251104]: 2026-01-21 16:35:25.258 251108 DEBUG nova.compute.manager [req-32dc5bf4-dddd-4bfb-b5ce-d6cda409f42e req-23e5cc1b-56c2-4243-a6ad-789ba865d67c 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Received event network-changed-8f623775-f44e-448c-8b71-2d3cece257a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:35:25 np0005590810 nova_compute[251104]: 2026-01-21 16:35:25.260 251108 DEBUG nova.compute.manager [req-32dc5bf4-dddd-4bfb-b5ce-d6cda409f42e req-23e5cc1b-56c2-4243-a6ad-789ba865d67c 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Refreshing instance network info cache due to event network-changed-8f623775-f44e-448c-8b71-2d3cece257a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 21 11:35:25 np0005590810 systemd[1]: libpod-cad0e95e99208a3e596db3f5bd0c50475e7f5646f852a34f4be8bbddd8bdcb9e.scope: Deactivated successfully.
Jan 21 11:35:25 np0005590810 nova_compute[251104]: 2026-01-21 16:35:25.260 251108 DEBUG oslo_concurrency.lockutils [req-32dc5bf4-dddd-4bfb-b5ce-d6cda409f42e req-23e5cc1b-56c2-4243-a6ad-789ba865d67c 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "refresh_cache-916b9de7-c0f7-499a-b45d-2b546ae37790" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:35:25 np0005590810 nova_compute[251104]: 2026-01-21 16:35:25.260 251108 DEBUG oslo_concurrency.lockutils [req-32dc5bf4-dddd-4bfb-b5ce-d6cda409f42e req-23e5cc1b-56c2-4243-a6ad-789ba865d67c 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquired lock "refresh_cache-916b9de7-c0f7-499a-b45d-2b546ae37790" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:35:25 np0005590810 podman[264276]: 2026-01-21 16:35:25.260686321 +0000 UTC m=+0.152365133 container attach cad0e95e99208a3e596db3f5bd0c50475e7f5646f852a34f4be8bbddd8bdcb9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:35:25 np0005590810 nova_compute[251104]: 2026-01-21 16:35:25.261 251108 DEBUG nova.network.neutron [req-32dc5bf4-dddd-4bfb-b5ce-d6cda409f42e req-23e5cc1b-56c2-4243-a6ad-789ba865d67c 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Refreshing network info cache for port 8f623775-f44e-448c-8b71-2d3cece257a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 21 11:35:25 np0005590810 podman[264276]: 2026-01-21 16:35:25.261814396 +0000 UTC m=+0.153493168 container died cad0e95e99208a3e596db3f5bd0c50475e7f5646f852a34f4be8bbddd8bdcb9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_williamson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 21 11:35:25 np0005590810 systemd[1]: var-lib-containers-storage-overlay-fe8f1f4790e0d8d2035c7f689024d9f456e39b162c64b8c3b592b70ce1c780d1-merged.mount: Deactivated successfully.
Jan 21 11:35:25 np0005590810 podman[264276]: 2026-01-21 16:35:25.307673572 +0000 UTC m=+0.199352344 container remove cad0e95e99208a3e596db3f5bd0c50475e7f5646f852a34f4be8bbddd8bdcb9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_williamson, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 11:35:25 np0005590810 systemd[1]: libpod-conmon-cad0e95e99208a3e596db3f5bd0c50475e7f5646f852a34f4be8bbddd8bdcb9e.scope: Deactivated successfully.
Jan 21 11:35:25 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:25 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:25 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:25 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:35:25 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:25 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:25 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:35:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:25.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:25 np0005590810 podman[264319]: 2026-01-21 16:35:25.504125514 +0000 UTC m=+0.039403913 container create 110a2f9c2ab338fa892b05ccc60ccea8ace5d51577ef7f9622296eb3c6a311af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:35:25 np0005590810 systemd[1]: Started libpod-conmon-110a2f9c2ab338fa892b05ccc60ccea8ace5d51577ef7f9622296eb3c6a311af.scope.
Jan 21 11:35:25 np0005590810 podman[264319]: 2026-01-21 16:35:25.48752717 +0000 UTC m=+0.022805609 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:35:25 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:35:25 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea5e753c95a29b8aa31f70a07b3c23b3961c3c62b0666003abf46b221529a7d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:25 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea5e753c95a29b8aa31f70a07b3c23b3961c3c62b0666003abf46b221529a7d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:25 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea5e753c95a29b8aa31f70a07b3c23b3961c3c62b0666003abf46b221529a7d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:25 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea5e753c95a29b8aa31f70a07b3c23b3961c3c62b0666003abf46b221529a7d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:25 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea5e753c95a29b8aa31f70a07b3c23b3961c3c62b0666003abf46b221529a7d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:35:25] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Jan 21 11:35:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:35:25] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Jan 21 11:35:25 np0005590810 podman[264319]: 2026-01-21 16:35:25.614868773 +0000 UTC m=+0.150147202 container init 110a2f9c2ab338fa892b05ccc60ccea8ace5d51577ef7f9622296eb3c6a311af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:35:25 np0005590810 podman[264319]: 2026-01-21 16:35:25.622657238 +0000 UTC m=+0.157935647 container start 110a2f9c2ab338fa892b05ccc60ccea8ace5d51577ef7f9622296eb3c6a311af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_zhukovsky, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:35:25 np0005590810 podman[264319]: 2026-01-21 16:35:25.626099647 +0000 UTC m=+0.161378076 container attach 110a2f9c2ab338fa892b05ccc60ccea8ace5d51577ef7f9622296eb3c6a311af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 21 11:35:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:35:25 np0005590810 magical_zhukovsky[264335]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:35:25 np0005590810 magical_zhukovsky[264335]: --> All data devices are unavailable
Jan 21 11:35:26 np0005590810 systemd[1]: libpod-110a2f9c2ab338fa892b05ccc60ccea8ace5d51577ef7f9622296eb3c6a311af.scope: Deactivated successfully.
Jan 21 11:35:26 np0005590810 podman[264319]: 2026-01-21 16:35:26.031554905 +0000 UTC m=+0.566833314 container died 110a2f9c2ab338fa892b05ccc60ccea8ace5d51577ef7f9622296eb3c6a311af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:35:26 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ea5e753c95a29b8aa31f70a07b3c23b3961c3c62b0666003abf46b221529a7d6-merged.mount: Deactivated successfully.
Jan 21 11:35:26 np0005590810 podman[264319]: 2026-01-21 16:35:26.091375771 +0000 UTC m=+0.626654180 container remove 110a2f9c2ab338fa892b05ccc60ccea8ace5d51577ef7f9622296eb3c6a311af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_zhukovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:35:26 np0005590810 systemd[1]: libpod-conmon-110a2f9c2ab338fa892b05ccc60ccea8ace5d51577ef7f9622296eb3c6a311af.scope: Deactivated successfully.
Jan 21 11:35:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 21 11:35:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:26.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:26 np0005590810 nova_compute[251104]: 2026-01-21 16:35:26.413 251108 DEBUG nova.network.neutron [req-32dc5bf4-dddd-4bfb-b5ce-d6cda409f42e req-23e5cc1b-56c2-4243-a6ad-789ba865d67c 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Updated VIF entry in instance network info cache for port 8f623775-f44e-448c-8b71-2d3cece257a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 21 11:35:26 np0005590810 nova_compute[251104]: 2026-01-21 16:35:26.414 251108 DEBUG nova.network.neutron [req-32dc5bf4-dddd-4bfb-b5ce-d6cda409f42e req-23e5cc1b-56c2-4243-a6ad-789ba865d67c 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Updating instance_info_cache with network_info: [{"id": "8f623775-f44e-448c-8b71-2d3cece257a2", "address": "fa:16:3e:86:28:5b", "network": {"id": "aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67", "bridge": "br-int", "label": "tempest-network-smoke--941394341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f623775-f4", "ovs_interfaceid": "8f623775-f44e-448c-8b71-2d3cece257a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:35:26 np0005590810 nova_compute[251104]: 2026-01-21 16:35:26.442 251108 DEBUG oslo_concurrency.lockutils [req-32dc5bf4-dddd-4bfb-b5ce-d6cda409f42e req-23e5cc1b-56c2-4243-a6ad-789ba865d67c 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Releasing lock "refresh_cache-916b9de7-c0f7-499a-b45d-2b546ae37790" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:35:26 np0005590810 podman[264452]: 2026-01-21 16:35:26.717041278 +0000 UTC m=+0.044068289 container create 0b8867d45d2942de462afd894cfdaba4fddd54623906d50f2115cf4f84001254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:35:26 np0005590810 systemd[1]: Started libpod-conmon-0b8867d45d2942de462afd894cfdaba4fddd54623906d50f2115cf4f84001254.scope.
Jan 21 11:35:26 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:35:26 np0005590810 podman[264452]: 2026-01-21 16:35:26.698888737 +0000 UTC m=+0.025915768 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:35:26 np0005590810 podman[264452]: 2026-01-21 16:35:26.816911377 +0000 UTC m=+0.143938398 container init 0b8867d45d2942de462afd894cfdaba4fddd54623906d50f2115cf4f84001254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 11:35:26 np0005590810 podman[264452]: 2026-01-21 16:35:26.825761606 +0000 UTC m=+0.152788617 container start 0b8867d45d2942de462afd894cfdaba4fddd54623906d50f2115cf4f84001254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_swartz, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 21 11:35:26 np0005590810 podman[264452]: 2026-01-21 16:35:26.830751023 +0000 UTC m=+0.157778064 container attach 0b8867d45d2942de462afd894cfdaba4fddd54623906d50f2115cf4f84001254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_swartz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 11:35:26 np0005590810 boring_swartz[264470]: 167 167
Jan 21 11:35:26 np0005590810 systemd[1]: libpod-0b8867d45d2942de462afd894cfdaba4fddd54623906d50f2115cf4f84001254.scope: Deactivated successfully.
Jan 21 11:35:26 np0005590810 podman[264452]: 2026-01-21 16:35:26.835413509 +0000 UTC m=+0.162440520 container died 0b8867d45d2942de462afd894cfdaba4fddd54623906d50f2115cf4f84001254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 21 11:35:26 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ee43d3c92d84784c9b5fbf68842dba7ff999aa2f1483f8d77f24438c03a39426-merged.mount: Deactivated successfully.
Jan 21 11:35:26 np0005590810 podman[264452]: 2026-01-21 16:35:26.884184067 +0000 UTC m=+0.211211078 container remove 0b8867d45d2942de462afd894cfdaba4fddd54623906d50f2115cf4f84001254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:35:26 np0005590810 systemd[1]: libpod-conmon-0b8867d45d2942de462afd894cfdaba4fddd54623906d50f2115cf4f84001254.scope: Deactivated successfully.
Jan 21 11:35:27 np0005590810 podman[264493]: 2026-01-21 16:35:27.072035997 +0000 UTC m=+0.046768796 container create 91714d6a5ad9bea6e03b73c24709dca6da8cb99f5a9a0d362af3e37636758124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_keldysh, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:35:27 np0005590810 systemd[1]: Started libpod-conmon-91714d6a5ad9bea6e03b73c24709dca6da8cb99f5a9a0d362af3e37636758124.scope.
Jan 21 11:35:27 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:35:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/624d074a06845cabf0212dd623aaa7ee0b618ac51c141a1b47802b712c02b01f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:27 np0005590810 podman[264493]: 2026-01-21 16:35:27.053400039 +0000 UTC m=+0.028132858 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:35:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/624d074a06845cabf0212dd623aaa7ee0b618ac51c141a1b47802b712c02b01f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/624d074a06845cabf0212dd623aaa7ee0b618ac51c141a1b47802b712c02b01f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:27 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/624d074a06845cabf0212dd623aaa7ee0b618ac51c141a1b47802b712c02b01f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:35:27.160Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:35:27 np0005590810 podman[264493]: 2026-01-21 16:35:27.165588135 +0000 UTC m=+0.140320954 container init 91714d6a5ad9bea6e03b73c24709dca6da8cb99f5a9a0d362af3e37636758124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 21 11:35:27 np0005590810 podman[264493]: 2026-01-21 16:35:27.17368316 +0000 UTC m=+0.148415949 container start 91714d6a5ad9bea6e03b73c24709dca6da8cb99f5a9a0d362af3e37636758124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:35:27 np0005590810 podman[264493]: 2026-01-21 16:35:27.18257644 +0000 UTC m=+0.157309259 container attach 91714d6a5ad9bea6e03b73c24709dca6da8cb99f5a9a0d362af3e37636758124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]: {
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:    "0": [
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:        {
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            "devices": [
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "/dev/loop3"
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            ],
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            "lv_name": "ceph_lv0",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            "lv_size": "21470642176",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            "name": "ceph_lv0",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            "tags": {
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.cluster_name": "ceph",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.crush_device_class": "",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.encrypted": "0",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.osd_id": "0",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.type": "block",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.vdo": "0",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:                "ceph.with_tpm": "0"
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            },
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            "type": "block",
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:            "vg_name": "ceph_vg0"
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:        }
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]:    ]
Jan 21 11:35:27 np0005590810 confident_keldysh[264509]: }
Jan 21 11:35:27 np0005590810 systemd[1]: libpod-91714d6a5ad9bea6e03b73c24709dca6da8cb99f5a9a0d362af3e37636758124.scope: Deactivated successfully.
Jan 21 11:35:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:27.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:27 np0005590810 podman[264519]: 2026-01-21 16:35:27.536577637 +0000 UTC m=+0.033638631 container died 91714d6a5ad9bea6e03b73c24709dca6da8cb99f5a9a0d362af3e37636758124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_keldysh, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:35:27 np0005590810 systemd[1]: var-lib-containers-storage-overlay-624d074a06845cabf0212dd623aaa7ee0b618ac51c141a1b47802b712c02b01f-merged.mount: Deactivated successfully.
Jan 21 11:35:27 np0005590810 podman[264519]: 2026-01-21 16:35:27.590319701 +0000 UTC m=+0.087380615 container remove 91714d6a5ad9bea6e03b73c24709dca6da8cb99f5a9a0d362af3e37636758124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_keldysh, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:35:27 np0005590810 systemd[1]: libpod-conmon-91714d6a5ad9bea6e03b73c24709dca6da8cb99f5a9a0d362af3e37636758124.scope: Deactivated successfully.
Jan 21 11:35:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 85 op/s
Jan 21 11:35:28 np0005590810 podman[264624]: 2026-01-21 16:35:28.225242441 +0000 UTC m=+0.043321367 container create 7e73fa2604599e40fc452ffd873b257c58931bd29acb2404caf84c8d91f1bfcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_darwin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:35:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:28.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:28 np0005590810 systemd[1]: Started libpod-conmon-7e73fa2604599e40fc452ffd873b257c58931bd29acb2404caf84c8d91f1bfcb.scope.
Jan 21 11:35:28 np0005590810 podman[264624]: 2026-01-21 16:35:28.208601547 +0000 UTC m=+0.026680493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:35:28 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:35:28 np0005590810 podman[264624]: 2026-01-21 16:35:28.325566013 +0000 UTC m=+0.143644969 container init 7e73fa2604599e40fc452ffd873b257c58931bd29acb2404caf84c8d91f1bfcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_darwin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:35:28 np0005590810 podman[264624]: 2026-01-21 16:35:28.332557722 +0000 UTC m=+0.150636648 container start 7e73fa2604599e40fc452ffd873b257c58931bd29acb2404caf84c8d91f1bfcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 21 11:35:28 np0005590810 confident_darwin[264640]: 167 167
Jan 21 11:35:28 np0005590810 systemd[1]: libpod-7e73fa2604599e40fc452ffd873b257c58931bd29acb2404caf84c8d91f1bfcb.scope: Deactivated successfully.
Jan 21 11:35:28 np0005590810 podman[264624]: 2026-01-21 16:35:28.33974783 +0000 UTC m=+0.157826776 container attach 7e73fa2604599e40fc452ffd873b257c58931bd29acb2404caf84c8d91f1bfcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_darwin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 21 11:35:28 np0005590810 podman[264624]: 2026-01-21 16:35:28.342105414 +0000 UTC m=+0.160184340 container died 7e73fa2604599e40fc452ffd873b257c58931bd29acb2404caf84c8d91f1bfcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:35:28 np0005590810 systemd[1]: var-lib-containers-storage-overlay-7dcb58b2df599e24a9e0024f778aa0e0c22c2df2f2345bab8eac68583d6c0b3d-merged.mount: Deactivated successfully.
Jan 21 11:35:28 np0005590810 podman[264624]: 2026-01-21 16:35:28.400078161 +0000 UTC m=+0.218157087 container remove 7e73fa2604599e40fc452ffd873b257c58931bd29acb2404caf84c8d91f1bfcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_darwin, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 11:35:28 np0005590810 systemd[1]: libpod-conmon-7e73fa2604599e40fc452ffd873b257c58931bd29acb2404caf84c8d91f1bfcb.scope: Deactivated successfully.
Jan 21 11:35:28 np0005590810 podman[264667]: 2026-01-21 16:35:28.587406304 +0000 UTC m=+0.048639653 container create 7ec45595b011830e912b63b2fd0849719f868e11b0bf551b2201331a0c69e235 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 21 11:35:28 np0005590810 systemd[1]: Started libpod-conmon-7ec45595b011830e912b63b2fd0849719f868e11b0bf551b2201331a0c69e235.scope.
Jan 21 11:35:28 np0005590810 podman[264667]: 2026-01-21 16:35:28.565079481 +0000 UTC m=+0.026312860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:35:28 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:35:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8780c1dcecf99d4895aa4def152b8a8e4a178c6a60a64ac253be7e26609a00d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8780c1dcecf99d4895aa4def152b8a8e4a178c6a60a64ac253be7e26609a00d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8780c1dcecf99d4895aa4def152b8a8e4a178c6a60a64ac253be7e26609a00d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:28 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8780c1dcecf99d4895aa4def152b8a8e4a178c6a60a64ac253be7e26609a00d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:35:28 np0005590810 podman[264667]: 2026-01-21 16:35:28.683926537 +0000 UTC m=+0.145159926 container init 7ec45595b011830e912b63b2fd0849719f868e11b0bf551b2201331a0c69e235 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:35:28 np0005590810 podman[264667]: 2026-01-21 16:35:28.692269169 +0000 UTC m=+0.153502528 container start 7ec45595b011830e912b63b2fd0849719f868e11b0bf551b2201331a0c69e235 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_elion, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 11:35:28 np0005590810 podman[264667]: 2026-01-21 16:35:28.698280909 +0000 UTC m=+0.159514288 container attach 7ec45595b011830e912b63b2fd0849719f868e11b0bf551b2201331a0c69e235 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_elion, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 21 11:35:29 np0005590810 nova_compute[251104]: 2026-01-21 16:35:29.271 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:29 np0005590810 lvm[264767]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:35:29 np0005590810 lvm[264767]: VG ceph_vg0 finished
Jan 21 11:35:29 np0005590810 hopeful_elion[264684]: {}
Jan 21 11:35:29 np0005590810 systemd[1]: libpod-7ec45595b011830e912b63b2fd0849719f868e11b0bf551b2201331a0c69e235.scope: Deactivated successfully.
Jan 21 11:35:29 np0005590810 systemd[1]: libpod-7ec45595b011830e912b63b2fd0849719f868e11b0bf551b2201331a0c69e235.scope: Consumed 1.143s CPU time.
Jan 21 11:35:29 np0005590810 podman[264667]: 2026-01-21 16:35:29.418004352 +0000 UTC m=+0.879237731 container died 7ec45595b011830e912b63b2fd0849719f868e11b0bf551b2201331a0c69e235 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_elion, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:35:29 np0005590810 podman[264759]: 2026-01-21 16:35:29.429120092 +0000 UTC m=+0.114132328 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 11:35:29 np0005590810 systemd[1]: var-lib-containers-storage-overlay-8780c1dcecf99d4895aa4def152b8a8e4a178c6a60a64ac253be7e26609a00d8-merged.mount: Deactivated successfully.
Jan 21 11:35:29 np0005590810 podman[264667]: 2026-01-21 16:35:29.473687246 +0000 UTC m=+0.934920605 container remove 7ec45595b011830e912b63b2fd0849719f868e11b0bf551b2201331a0c69e235 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_elion, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 21 11:35:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:29.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:29 np0005590810 systemd[1]: libpod-conmon-7ec45595b011830e912b63b2fd0849719f868e11b0bf551b2201331a0c69e235.scope: Deactivated successfully.
Jan 21 11:35:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:35:29 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:29 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:35:29 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:30 np0005590810 nova_compute[251104]: 2026-01-21 16:35:30.200 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 85 op/s
Jan 21 11:35:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:30.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:30 np0005590810 nova_compute[251104]: 2026-01-21 16:35:30.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:30 np0005590810 nova_compute[251104]: 2026-01-21 16:35:30.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 21 11:35:30 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:30 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:35:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:35:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:31.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 86 op/s
Jan 21 11:35:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:32.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:35:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:33.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:35:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 68 op/s
Jan 21 11:35:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:34.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:34 np0005590810 nova_compute[251104]: 2026-01-21 16:35:34.275 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:34 np0005590810 ovn_controller[152632]: 2026-01-21T16:35:34Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:28:5b 10.100.0.4
Jan 21 11:35:34 np0005590810 ovn_controller[152632]: 2026-01-21T16:35:34Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:28:5b 10.100.0.4
Jan 21 11:35:35 np0005590810 nova_compute[251104]: 2026-01-21 16:35:35.200 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:35 np0005590810 nova_compute[251104]: 2026-01-21 16:35:35.382 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:35:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:35.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:35:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:35:35] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Jan 21 11:35:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:35:35] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Jan 21 11:35:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:35:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 21 11:35:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:36.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:36 np0005590810 nova_compute[251104]: 2026-01-21 16:35:36.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:35:37.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:35:37 np0005590810 nova_compute[251104]: 2026-01-21 16:35:37.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:37 np0005590810 nova_compute[251104]: 2026-01-21 16:35:37.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 11:35:37 np0005590810 nova_compute[251104]: 2026-01-21 16:35:37.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 11:35:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:35:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:37.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:35:37 np0005590810 nova_compute[251104]: 2026-01-21 16:35:37.920 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "refresh_cache-916b9de7-c0f7-499a-b45d-2b546ae37790" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:35:37 np0005590810 nova_compute[251104]: 2026-01-21 16:35:37.920 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquired lock "refresh_cache-916b9de7-c0f7-499a-b45d-2b546ae37790" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:35:37 np0005590810 nova_compute[251104]: 2026-01-21 16:35:37.920 251108 DEBUG nova.network.neutron [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 21 11:35:37 np0005590810 nova_compute[251104]: 2026-01-21 16:35:37.921 251108 DEBUG nova.objects.instance [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 916b9de7-c0f7-499a-b45d-2b546ae37790 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 21 11:35:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 21 11:35:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:38.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:39 np0005590810 nova_compute[251104]: 2026-01-21 16:35:39.109 251108 DEBUG nova.network.neutron [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Updating instance_info_cache with network_info: [{"id": "8f623775-f44e-448c-8b71-2d3cece257a2", "address": "fa:16:3e:86:28:5b", "network": {"id": "aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67", "bridge": "br-int", "label": "tempest-network-smoke--941394341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f623775-f4", "ovs_interfaceid": "8f623775-f44e-448c-8b71-2d3cece257a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:35:39 np0005590810 nova_compute[251104]: 2026-01-21 16:35:39.127 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Releasing lock "refresh_cache-916b9de7-c0f7-499a-b45d-2b546ae37790" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:35:39 np0005590810 nova_compute[251104]: 2026-01-21 16:35:39.127 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 21 11:35:39 np0005590810 nova_compute[251104]: 2026-01-21 16:35:39.128 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:35:39
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'images', 'default.rgw.control', '.nfs']
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:35:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:35:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:35:39 np0005590810 nova_compute[251104]: 2026-01-21 16:35:39.278 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:35:39 np0005590810 nova_compute[251104]: 2026-01-21 16:35:39.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:39 np0005590810 nova_compute[251104]: 2026-01-21 16:35:39.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:39 np0005590810 nova_compute[251104]: 2026-01-21 16:35:39.370 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 21 11:35:39 np0005590810 nova_compute[251104]: 2026-01-21 16:35:39.391 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 21 11:35:39 np0005590810 nova_compute[251104]: 2026-01-21 16:35:39.392 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:39.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015167019722276487 of space, bias 1.0, pg target 0.4550105916682946 quantized to 32 (current 32)
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:35:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:35:40 np0005590810 nova_compute[251104]: 2026-01-21 16:35:40.203 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 21 11:35:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:40.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:40 np0005590810 nova_compute[251104]: 2026-01-21 16:35:40.400 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:40 np0005590810 nova_compute[251104]: 2026-01-21 16:35:40.422 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:40 np0005590810 nova_compute[251104]: 2026-01-21 16:35:40.443 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:40 np0005590810 nova_compute[251104]: 2026-01-21 16:35:40.443 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:40 np0005590810 nova_compute[251104]: 2026-01-21 16:35:40.444 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:35:40 np0005590810 nova_compute[251104]: 2026-01-21 16:35:40.444 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:35:40 np0005590810 nova_compute[251104]: 2026-01-21 16:35:40.444 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:35:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:35:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:35:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1029445738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:35:40 np0005590810 nova_compute[251104]: 2026-01-21 16:35:40.935 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:35:41 np0005590810 nova_compute[251104]: 2026-01-21 16:35:41.018 251108 DEBUG nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 21 11:35:41 np0005590810 nova_compute[251104]: 2026-01-21 16:35:41.019 251108 DEBUG nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 21 11:35:41 np0005590810 nova_compute[251104]: 2026-01-21 16:35:41.195 251108 WARNING nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:35:41 np0005590810 nova_compute[251104]: 2026-01-21 16:35:41.196 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4410MB free_disk=59.89729690551758GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 11:35:41 np0005590810 nova_compute[251104]: 2026-01-21 16:35:41.197 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:41 np0005590810 nova_compute[251104]: 2026-01-21 16:35:41.197 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:41 np0005590810 nova_compute[251104]: 2026-01-21 16:35:41.495 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Instance 916b9de7-c0f7-499a-b45d-2b546ae37790 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 21 11:35:41 np0005590810 nova_compute[251104]: 2026-01-21 16:35:41.495 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 11:35:41 np0005590810 nova_compute[251104]: 2026-01-21 16:35:41.496 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 11:35:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:41.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:41 np0005590810 nova_compute[251104]: 2026-01-21 16:35:41.675 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:35:41 np0005590810 nova_compute[251104]: 2026-01-21 16:35:41.724 251108 INFO nova.compute.manager [None req-745d7364-ce11-4a9e-911b-12bb0e52a244 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Get console output#033[00m
Jan 21 11:35:41 np0005590810 nova_compute[251104]: 2026-01-21 16:35:41.733 260713 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.056 251108 DEBUG oslo_concurrency.lockutils [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "916b9de7-c0f7-499a-b45d-2b546ae37790" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.057 251108 DEBUG oslo_concurrency.lockutils [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.057 251108 DEBUG oslo_concurrency.lockutils [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.057 251108 DEBUG oslo_concurrency.lockutils [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.058 251108 DEBUG oslo_concurrency.lockutils [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.059 251108 INFO nova.compute.manager [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Terminating instance#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.061 251108 DEBUG nova.compute.manager [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 21 11:35:42 np0005590810 kernel: tap8f623775-f4 (unregistering): left promiscuous mode
Jan 21 11:35:42 np0005590810 NetworkManager[48894]: <info>  [1769013342.1198] device (tap8f623775-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 21 11:35:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:35:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/458443741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:35:42 np0005590810 ovn_controller[152632]: 2026-01-21T16:35:42Z|00054|binding|INFO|Releasing lport 8f623775-f44e-448c-8b71-2d3cece257a2 from this chassis (sb_readonly=0)
Jan 21 11:35:42 np0005590810 ovn_controller[152632]: 2026-01-21T16:35:42Z|00055|binding|INFO|Setting lport 8f623775-f44e-448c-8b71-2d3cece257a2 down in Southbound
Jan 21 11:35:42 np0005590810 ovn_controller[152632]: 2026-01-21T16:35:42Z|00056|binding|INFO|Removing iface tap8f623775-f4 ovn-installed in OVS
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.131 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.149 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.152 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:28:5b 10.100.0.4'], port_security=['fa:16:3e:86:28:5b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '916b9de7-c0f7-499a-b45d-2b546ae37790', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d6214185b004f9c9798abfc29d1ae14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89656e6d-0ecf-4dec-802e-454a01b90ef3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a5c18f9-d28d-4536-a1f2-7252480fabee, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], logical_port=8f623775-f44e-448c-8b71-2d3cece257a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.154 163593 INFO neutron.agent.ovn.metadata.agent [-] Port 8f623775-f44e-448c-8b71-2d3cece257a2 in datapath aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67 unbound from our chassis#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.163 163593 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.165 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[aca4b39d-ff3b-4b5d-84ab-212e2066877d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.166 163593 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67 namespace which is not needed anymore#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.177 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.184 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:35:42 np0005590810 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 21 11:35:42 np0005590810 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 14.125s CPU time.
Jan 21 11:35:42 np0005590810 systemd-machined[217254]: Machine qemu-2-instance-00000005 terminated.
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.211 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:35:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.232 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.232 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:35:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:42.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.281 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.288 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.297 251108 INFO nova.virt.libvirt.driver [-] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Instance destroyed successfully.#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.298 251108 DEBUG nova.objects.instance [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lazy-loading 'resources' on Instance uuid 916b9de7-c0f7-499a-b45d-2b546ae37790 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 21 11:35:42 np0005590810 neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67[262598]: [NOTICE]   (262602) : haproxy version is 2.8.14-c23fe91
Jan 21 11:35:42 np0005590810 neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67[262598]: [NOTICE]   (262602) : path to executable is /usr/sbin/haproxy
Jan 21 11:35:42 np0005590810 neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67[262598]: [WARNING]  (262602) : Exiting Master process...
Jan 21 11:35:42 np0005590810 neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67[262598]: [WARNING]  (262602) : Exiting Master process...
Jan 21 11:35:42 np0005590810 neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67[262598]: [ALERT]    (262602) : Current worker (262604) exited with code 143 (Terminated)
Jan 21 11:35:42 np0005590810 neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67[262598]: [WARNING]  (262602) : All workers exited. Exiting... (0)
Jan 21 11:35:42 np0005590810 systemd[1]: libpod-dd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5.scope: Deactivated successfully.
Jan 21 11:35:42 np0005590810 podman[264910]: 2026-01-21 16:35:42.311622393 +0000 UTC m=+0.047368563 container died dd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.313 251108 DEBUG nova.virt.libvirt.vif [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-21T16:35:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1304058228',display_name='tempest-TestNetworkBasicOps-server-1304058228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1304058228',id=5,image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAODY0oPCHyc4+PjQ1725+nevOCWRrwWD4hHxtkHr9gLP39zHubzKPjYJSqNgg3dm+06/jcbQU4KzBZGc283KicJsmRyBqlgO57i4dMI+5UaV5ILKMUSPd1Pbh0vRFjh0g==',key_name='tempest-TestNetworkBasicOps-53645135',keypairs=<?>,launch_index=0,launched_at=2026-01-21T16:35:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d6214185b004f9c9798abfc29d1ae14',ramdisk_id='',reservation_id='r-qi8jqxbp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1793517209',owner_user_name='tempest-TestNetworkBasicOps-1793517209-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-21T16:35:21Z,user_data=None,user_id='918cf3fb78394ce8b3ade91a1ad699fc',uuid=916b9de7-c0f7-499a-b45d-2b546ae37790,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f623775-f44e-448c-8b71-2d3cece257a2", "address": "fa:16:3e:86:28:5b", "network": {"id": "aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67", "bridge": "br-int", "label": "tempest-network-smoke--941394341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f623775-f4", "ovs_interfaceid": "8f623775-f44e-448c-8b71-2d3cece257a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.314 251108 DEBUG nova.network.os_vif_util [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converting VIF {"id": "8f623775-f44e-448c-8b71-2d3cece257a2", "address": "fa:16:3e:86:28:5b", "network": {"id": "aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67", "bridge": "br-int", "label": "tempest-network-smoke--941394341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f623775-f4", "ovs_interfaceid": "8f623775-f44e-448c-8b71-2d3cece257a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.315 251108 DEBUG nova.network.os_vif_util [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:28:5b,bridge_name='br-int',has_traffic_filtering=True,id=8f623775-f44e-448c-8b71-2d3cece257a2,network=Network(aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f623775-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.315 251108 DEBUG os_vif [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:28:5b,bridge_name='br-int',has_traffic_filtering=True,id=8f623775-f44e-448c-8b71-2d3cece257a2,network=Network(aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f623775-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.317 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.318 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f623775-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.321 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.324 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.325 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.328 251108 INFO os_vif [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:28:5b,bridge_name='br-int',has_traffic_filtering=True,id=8f623775-f44e-448c-8b71-2d3cece257a2,network=Network(aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f623775-f4')#033[00m
Jan 21 11:35:42 np0005590810 systemd[1]: var-lib-containers-storage-overlay-08477096d1347d49652bfcb20aecdee0f1269fce6683b3735c6ad0818a21d89e-merged.mount: Deactivated successfully.
Jan 21 11:35:42 np0005590810 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5-userdata-shm.mount: Deactivated successfully.
Jan 21 11:35:42 np0005590810 podman[264910]: 2026-01-21 16:35:42.35879525 +0000 UTC m=+0.094541420 container cleanup dd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 11:35:42 np0005590810 systemd[1]: libpod-conmon-dd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5.scope: Deactivated successfully.
Jan 21 11:35:42 np0005590810 podman[264966]: 2026-01-21 16:35:42.45874414 +0000 UTC m=+0.073239589 container remove dd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.465 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[caf3bc20-5f0f-4913-b6d9-17f946b92249]: (4, ('Wed Jan 21 04:35:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67 (dd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5)\ndd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5\nWed Jan 21 04:35:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67 (dd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5)\ndd50e342a96eadb224440cfe1de726b243bb344bbea2cc71b38bb3ecb1484cf5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.468 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[78745e6e-d735-4b5e-8a9f-c631ec61bfb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.470 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa5f9bb7-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:35:42 np0005590810 kernel: tapaa5f9bb7-f0: left promiscuous mode
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.472 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.474 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.477 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[92f0a93d-7f57-49d8-bae8-3290ac113d9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:42 np0005590810 nova_compute[251104]: 2026-01-21 16:35:42.490 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.496 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[c242bfea-1e10-4666-a477-85469f301d80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.498 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed29e55-1894-45f1-b8b9-d0606244e59e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.513 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[bbbc9069-6755-44fe-9eed-0b8deea65742]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453684, 'reachable_time': 17290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264984, 'error': None, 'target': 'ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.516 163844 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa5f9bb7-fb3a-4cdc-bae7-e2d2a0857c67 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 21 11:35:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:42.517 163844 DEBUG oslo.privsep.daemon [-] privsep: reply[29e4248b-7275-462c-b16c-59499ef6632f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:35:42 np0005590810 systemd[1]: run-netns-ovnmeta\x2daa5f9bb7\x2dfb3a\x2d4cdc\x2dbae7\x2de2d2a0857c67.mount: Deactivated successfully.
Jan 21 11:35:43 np0005590810 nova_compute[251104]: 2026-01-21 16:35:43.178 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:43 np0005590810 nova_compute[251104]: 2026-01-21 16:35:43.179 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:43 np0005590810 nova_compute[251104]: 2026-01-21 16:35:43.179 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:35:43 np0005590810 nova_compute[251104]: 2026-01-21 16:35:43.179 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 11:35:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:43.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:43 np0005590810 nova_compute[251104]: 2026-01-21 16:35:43.539 251108 INFO nova.virt.libvirt.driver [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Deleting instance files /var/lib/nova/instances/916b9de7-c0f7-499a-b45d-2b546ae37790_del#033[00m
Jan 21 11:35:43 np0005590810 nova_compute[251104]: 2026-01-21 16:35:43.541 251108 INFO nova.virt.libvirt.driver [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Deletion of /var/lib/nova/instances/916b9de7-c0f7-499a-b45d-2b546ae37790_del complete#033[00m
Jan 21 11:35:43 np0005590810 nova_compute[251104]: 2026-01-21 16:35:43.594 251108 INFO nova.compute.manager [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Took 1.53 seconds to destroy the instance on the hypervisor.#033[00m
Jan 21 11:35:43 np0005590810 nova_compute[251104]: 2026-01-21 16:35:43.595 251108 DEBUG oslo.service.loopingcall [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 21 11:35:43 np0005590810 nova_compute[251104]: 2026-01-21 16:35:43.595 251108 DEBUG nova.compute.manager [-] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 21 11:35:43 np0005590810 nova_compute[251104]: 2026-01-21 16:35:43.595 251108 DEBUG nova.network.neutron [-] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 21 11:35:44 np0005590810 nova_compute[251104]: 2026-01-21 16:35:44.068 251108 DEBUG nova.compute.manager [req-4273de0d-e905-4b09-a629-e5ea7fd925f8 req-34b3a0c9-4203-4b5d-8938-710c08f812d7 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Received event network-vif-unplugged-8f623775-f44e-448c-8b71-2d3cece257a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:35:44 np0005590810 nova_compute[251104]: 2026-01-21 16:35:44.068 251108 DEBUG oslo_concurrency.lockutils [req-4273de0d-e905-4b09-a629-e5ea7fd925f8 req-34b3a0c9-4203-4b5d-8938-710c08f812d7 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:44 np0005590810 nova_compute[251104]: 2026-01-21 16:35:44.068 251108 DEBUG oslo_concurrency.lockutils [req-4273de0d-e905-4b09-a629-e5ea7fd925f8 req-34b3a0c9-4203-4b5d-8938-710c08f812d7 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:44 np0005590810 nova_compute[251104]: 2026-01-21 16:35:44.068 251108 DEBUG oslo_concurrency.lockutils [req-4273de0d-e905-4b09-a629-e5ea7fd925f8 req-34b3a0c9-4203-4b5d-8938-710c08f812d7 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:35:44 np0005590810 nova_compute[251104]: 2026-01-21 16:35:44.069 251108 DEBUG nova.compute.manager [req-4273de0d-e905-4b09-a629-e5ea7fd925f8 req-34b3a0c9-4203-4b5d-8938-710c08f812d7 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] No waiting events found dispatching network-vif-unplugged-8f623775-f44e-448c-8b71-2d3cece257a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 21 11:35:44 np0005590810 nova_compute[251104]: 2026-01-21 16:35:44.069 251108 DEBUG nova.compute.manager [req-4273de0d-e905-4b09-a629-e5ea7fd925f8 req-34b3a0c9-4203-4b5d-8938-710c08f812d7 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Received event network-vif-unplugged-8f623775-f44e-448c-8b71-2d3cece257a2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 21 11:35:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 21 11:35:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:44.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:44 np0005590810 nova_compute[251104]: 2026-01-21 16:35:44.792 251108 DEBUG nova.network.neutron [-] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:35:44 np0005590810 nova_compute[251104]: 2026-01-21 16:35:44.821 251108 INFO nova.compute.manager [-] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Took 1.23 seconds to deallocate network for instance.#033[00m
Jan 21 11:35:44 np0005590810 nova_compute[251104]: 2026-01-21 16:35:44.886 251108 DEBUG nova.compute.manager [req-bbb7b14f-f80e-4bce-94f8-431a799937a6 req-c15c41ae-8c72-4092-8a66-cf09a1e38484 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Received event network-vif-deleted-8f623775-f44e-448c-8b71-2d3cece257a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:35:44 np0005590810 nova_compute[251104]: 2026-01-21 16:35:44.905 251108 DEBUG oslo_concurrency.lockutils [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:44 np0005590810 nova_compute[251104]: 2026-01-21 16:35:44.906 251108 DEBUG oslo_concurrency.lockutils [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:44 np0005590810 nova_compute[251104]: 2026-01-21 16:35:44.947 251108 DEBUG oslo_concurrency.processutils [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:35:45 np0005590810 nova_compute[251104]: 2026-01-21 16:35:45.205 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:35:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857361136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:35:45 np0005590810 nova_compute[251104]: 2026-01-21 16:35:45.427 251108 DEBUG oslo_concurrency.processutils [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:35:45 np0005590810 nova_compute[251104]: 2026-01-21 16:35:45.433 251108 DEBUG nova.compute.provider_tree [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:35:45 np0005590810 nova_compute[251104]: 2026-01-21 16:35:45.455 251108 DEBUG nova.scheduler.client.report [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:35:45 np0005590810 nova_compute[251104]: 2026-01-21 16:35:45.480 251108 DEBUG oslo_concurrency.lockutils [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:35:45 np0005590810 nova_compute[251104]: 2026-01-21 16:35:45.516 251108 INFO nova.scheduler.client.report [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Deleted allocations for instance 916b9de7-c0f7-499a-b45d-2b546ae37790#033[00m
Jan 21 11:35:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:45.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:45 np0005590810 nova_compute[251104]: 2026-01-21 16:35:45.591 251108 DEBUG oslo_concurrency.lockutils [None req-188f497c-2586-4650-81b8-620c3f21cb3d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:35:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:35:45] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Jan 21 11:35:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:35:45] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Jan 21 11:35:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:35:46 np0005590810 nova_compute[251104]: 2026-01-21 16:35:46.164 251108 DEBUG nova.compute.manager [req-45065753-eafc-44af-92ba-2511bbc1ac6e req-f562e5e3-6f75-4bc5-9b3c-b3c68bbf566d 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Received event network-vif-plugged-8f623775-f44e-448c-8b71-2d3cece257a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:35:46 np0005590810 nova_compute[251104]: 2026-01-21 16:35:46.165 251108 DEBUG oslo_concurrency.lockutils [req-45065753-eafc-44af-92ba-2511bbc1ac6e req-f562e5e3-6f75-4bc5-9b3c-b3c68bbf566d 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:35:46 np0005590810 nova_compute[251104]: 2026-01-21 16:35:46.165 251108 DEBUG oslo_concurrency.lockutils [req-45065753-eafc-44af-92ba-2511bbc1ac6e req-f562e5e3-6f75-4bc5-9b3c-b3c68bbf566d 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:35:46 np0005590810 nova_compute[251104]: 2026-01-21 16:35:46.165 251108 DEBUG oslo_concurrency.lockutils [req-45065753-eafc-44af-92ba-2511bbc1ac6e req-f562e5e3-6f75-4bc5-9b3c-b3c68bbf566d 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "916b9de7-c0f7-499a-b45d-2b546ae37790-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:35:46 np0005590810 nova_compute[251104]: 2026-01-21 16:35:46.165 251108 DEBUG nova.compute.manager [req-45065753-eafc-44af-92ba-2511bbc1ac6e req-f562e5e3-6f75-4bc5-9b3c-b3c68bbf566d 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] No waiting events found dispatching network-vif-plugged-8f623775-f44e-448c-8b71-2d3cece257a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 21 11:35:46 np0005590810 nova_compute[251104]: 2026-01-21 16:35:46.165 251108 WARNING nova.compute.manager [req-45065753-eafc-44af-92ba-2511bbc1ac6e req-f562e5e3-6f75-4bc5-9b3c-b3c68bbf566d 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Received unexpected event network-vif-plugged-8f623775-f44e-448c-8b71-2d3cece257a2 for instance with vm_state deleted and task_state None.#033[00m
Jan 21 11:35:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 348 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 21 11:35:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:46.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.722509) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013346722597, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2129, "num_deletes": 251, "total_data_size": 4236820, "memory_usage": 4302256, "flush_reason": "Manual Compaction"}
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013346745116, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4102057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24768, "largest_seqno": 26896, "table_properties": {"data_size": 4092573, "index_size": 5978, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19767, "raw_average_key_size": 20, "raw_value_size": 4073605, "raw_average_value_size": 4190, "num_data_blocks": 262, "num_entries": 972, "num_filter_entries": 972, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769013140, "oldest_key_time": 1769013140, "file_creation_time": 1769013346, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 22651 microseconds, and 9527 cpu microseconds.
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.745175) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4102057 bytes OK
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.745201) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.748148) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.748164) EVENT_LOG_v1 {"time_micros": 1769013346748158, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.748186) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4228194, prev total WAL file size 4228194, number of live WAL files 2.
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.749609) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(4005KB)], [56(10MB)]
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013346749692, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 15463610, "oldest_snapshot_seqno": -1}
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5798 keys, 13276667 bytes, temperature: kUnknown
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013346825642, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 13276667, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13236667, "index_size": 24402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 147315, "raw_average_key_size": 25, "raw_value_size": 13130721, "raw_average_value_size": 2264, "num_data_blocks": 998, "num_entries": 5798, "num_filter_entries": 5798, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769013346, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.825928) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 13276667 bytes
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.828816) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.4 rd, 174.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 10.8 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 6318, records dropped: 520 output_compression: NoCompression
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.828834) EVENT_LOG_v1 {"time_micros": 1769013346828825, "job": 30, "event": "compaction_finished", "compaction_time_micros": 76034, "compaction_time_cpu_micros": 27301, "output_level": 6, "num_output_files": 1, "total_output_size": 13276667, "num_input_records": 6318, "num_output_records": 5798, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013346829519, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013346831622, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.749527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.831763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.831772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.831774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.831777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:35:46 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:35:46.831779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:35:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:35:47.163Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:35:47 np0005590810 nova_compute[251104]: 2026-01-21 16:35:47.322 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:47.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 21 11:35:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:48.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:49.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:50 np0005590810 nova_compute[251104]: 2026-01-21 16:35:50.208 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 21 11:35:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:50.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:35:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:51.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:51 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:51.927 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:19:7b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:3b:98:31:96:2a'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:35:51 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:51.928 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 11:35:51 np0005590810 nova_compute[251104]: 2026-01-21 16:35:51.930 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:52 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 15 KiB/s wr, 57 op/s
Jan 21 11:35:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:52.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:52 np0005590810 nova_compute[251104]: 2026-01-21 16:35:52.324 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:53.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:53 np0005590810 nova_compute[251104]: 2026-01-21 16:35:53.537 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:53 np0005590810 nova_compute[251104]: 2026-01-21 16:35:53.628 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:53 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:35:53.931 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f6e8413f-2ba2-49cb-8bd6-36b8085ce01c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:35:54 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Jan 21 11:35:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:35:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:35:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:54.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:54 np0005590810 podman[265047]: 2026-01-21 16:35:54.684700109 +0000 UTC m=+0.062956125 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:35:55 np0005590810 nova_compute[251104]: 2026-01-21 16:35:55.210 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:55.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:35:55] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Jan 21 11:35:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:35:55] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Jan 21 11:35:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:35:56 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.3 KiB/s wr, 57 op/s
Jan 21 11:35:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:56.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:35:57.163Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:35:57 np0005590810 nova_compute[251104]: 2026-01-21 16:35:57.295 251108 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769013342.2941213, 916b9de7-c0f7-499a-b45d-2b546ae37790 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 21 11:35:57 np0005590810 nova_compute[251104]: 2026-01-21 16:35:57.295 251108 INFO nova.compute.manager [-] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] VM Stopped (Lifecycle Event)#033[00m
Jan 21 11:35:57 np0005590810 nova_compute[251104]: 2026-01-21 16:35:57.328 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:35:57 np0005590810 nova_compute[251104]: 2026-01-21 16:35:57.375 251108 DEBUG nova.compute.manager [None req-4a497f6f-3be4-42f7-b9e8-d1350ab187c4 - - - - - -] [instance: 916b9de7-c0f7-499a-b45d-2b546ae37790] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 21 11:35:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:57.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 21 11:35:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:35:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:35:58.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:35:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:35:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:35:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:35:59.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:35:59 np0005590810 podman[265073]: 2026-01-21 16:35:59.719610167 +0000 UTC m=+0.091882746 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 21 11:36:00 np0005590810 nova_compute[251104]: 2026-01-21 16:36:00.213 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 21 11:36:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:00.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:36:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:01.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 21 11:36:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:02.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:02 np0005590810 nova_compute[251104]: 2026-01-21 16:36:02.331 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:03.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:36:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:04.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:05 np0005590810 nova_compute[251104]: 2026-01-21 16:36:05.214 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:36:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:05.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:36:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:36:05] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Jan 21 11:36:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:36:05] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Jan 21 11:36:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:36:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:36:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:06.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:36:07.164Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:36:07 np0005590810 nova_compute[251104]: 2026-01-21 16:36:07.335 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:07.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:36:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:08.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:36:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:36:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:36:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:36:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:09.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:36:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:36:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:36:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:36:10 np0005590810 nova_compute[251104]: 2026-01-21 16:36:10.216 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:36:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:10.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:36:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:11.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:36:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:12.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:12 np0005590810 nova_compute[251104]: 2026-01-21 16:36:12.338 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:13.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:36:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:14.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:15 np0005590810 nova_compute[251104]: 2026-01-21 16:36:15.217 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:36:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:15.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:36:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:36:15] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Jan 21 11:36:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:36:15] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Jan 21 11:36:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:36:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:16.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 MiB/s wr, 28 op/s
Jan 21 11:36:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:36:17.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:36:17 np0005590810 nova_compute[251104]: 2026-01-21 16:36:17.341 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:17.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:18.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Jan 21 11:36:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:19.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:20 np0005590810 nova_compute[251104]: 2026-01-21 16:36:20.090 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:36:20 np0005590810 nova_compute[251104]: 2026-01-21 16:36:20.220 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:36:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:20.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:36:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Jan 21 11:36:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:36:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:21.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:36:22.024 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:36:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:36:22.025 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:36:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:36:22.025 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:36:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:22.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:22 np0005590810 nova_compute[251104]: 2026-01-21 16:36:22.344 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 186 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 21 11:36:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:23.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:36:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:36:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:24.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 185 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 21 11:36:25 np0005590810 nova_compute[251104]: 2026-01-21 16:36:25.222 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:36:25] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Jan 21 11:36:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:36:25] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Jan 21 11:36:25 np0005590810 podman[265177]: 2026-01-21 16:36:25.691798999 +0000 UTC m=+0.059837137 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:36:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:36:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 21 11:36:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:36:27.167Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:36:27 np0005590810 nova_compute[251104]: 2026-01-21 16:36:27.348 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:27.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:28.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 21 11:36:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:29.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:30 np0005590810 podman[265224]: 2026-01-21 16:36:30.024969061 +0000 UTC m=+0.089521393 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:36:30 np0005590810 nova_compute[251104]: 2026-01-21 16:36:30.223 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:30.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 21 11:36:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:36:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:36:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:36:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:36:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:36:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:36:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:36:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:36:31 np0005590810 ovn_controller[152632]: 2026-01-21T16:36:31Z|00057|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 21 11:36:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:36:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:36:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:36:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:36:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:36:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:36:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:36:31 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:36:31 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:36:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:31.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:31 np0005590810 podman[265398]: 2026-01-21 16:36:31.578039586 +0000 UTC m=+0.040573209 container create b7db279ce6d4214ee26cddeb5de6f06e94ee59198261a2b08b1e50630857183b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_dewdney, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:36:31 np0005590810 systemd[1]: Started libpod-conmon-b7db279ce6d4214ee26cddeb5de6f06e94ee59198261a2b08b1e50630857183b.scope.
Jan 21 11:36:31 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:36:31 np0005590810 podman[265398]: 2026-01-21 16:36:31.559986838 +0000 UTC m=+0.022520481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:36:31 np0005590810 podman[265398]: 2026-01-21 16:36:31.665607726 +0000 UTC m=+0.128141369 container init b7db279ce6d4214ee26cddeb5de6f06e94ee59198261a2b08b1e50630857183b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_dewdney, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:36:31 np0005590810 podman[265398]: 2026-01-21 16:36:31.674411724 +0000 UTC m=+0.136945347 container start b7db279ce6d4214ee26cddeb5de6f06e94ee59198261a2b08b1e50630857183b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_dewdney, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:36:31 np0005590810 romantic_dewdney[265414]: 167 167
Jan 21 11:36:31 np0005590810 systemd[1]: libpod-b7db279ce6d4214ee26cddeb5de6f06e94ee59198261a2b08b1e50630857183b.scope: Deactivated successfully.
Jan 21 11:36:31 np0005590810 conmon[265414]: conmon b7db279ce6d4214ee26c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7db279ce6d4214ee26cddeb5de6f06e94ee59198261a2b08b1e50630857183b.scope/container/memory.events
Jan 21 11:36:31 np0005590810 podman[265398]: 2026-01-21 16:36:31.683189371 +0000 UTC m=+0.145723084 container attach b7db279ce6d4214ee26cddeb5de6f06e94ee59198261a2b08b1e50630857183b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_dewdney, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 11:36:31 np0005590810 podman[265398]: 2026-01-21 16:36:31.683601984 +0000 UTC m=+0.146135607 container died b7db279ce6d4214ee26cddeb5de6f06e94ee59198261a2b08b1e50630857183b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_dewdney, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:36:31 np0005590810 systemd[1]: var-lib-containers-storage-overlay-1ac99fa3ce69f862f2597d531d007aae08587e2b368969a94bb154ee0c48cd1e-merged.mount: Deactivated successfully.
Jan 21 11:36:31 np0005590810 podman[265398]: 2026-01-21 16:36:31.723843381 +0000 UTC m=+0.186377004 container remove b7db279ce6d4214ee26cddeb5de6f06e94ee59198261a2b08b1e50630857183b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_dewdney, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:36:31 np0005590810 systemd[1]: libpod-conmon-b7db279ce6d4214ee26cddeb5de6f06e94ee59198261a2b08b1e50630857183b.scope: Deactivated successfully.
Jan 21 11:36:31 np0005590810 podman[265440]: 2026-01-21 16:36:31.900087596 +0000 UTC m=+0.048070416 container create 85ef29d1c11cb275cf0e5ea0bb311377f4444e62392fd67800cdb1003a81929c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:36:31 np0005590810 systemd[1]: Started libpod-conmon-85ef29d1c11cb275cf0e5ea0bb311377f4444e62392fd67800cdb1003a81929c.scope.
Jan 21 11:36:31 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:36:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487761d2fb3c6cf21d2cc0aa48b2988cde7087a57fe9a7ad60dd827d092f0893/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487761d2fb3c6cf21d2cc0aa48b2988cde7087a57fe9a7ad60dd827d092f0893/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487761d2fb3c6cf21d2cc0aa48b2988cde7087a57fe9a7ad60dd827d092f0893/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:31 np0005590810 podman[265440]: 2026-01-21 16:36:31.880938853 +0000 UTC m=+0.028921693 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:36:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487761d2fb3c6cf21d2cc0aa48b2988cde7087a57fe9a7ad60dd827d092f0893/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:31 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487761d2fb3c6cf21d2cc0aa48b2988cde7087a57fe9a7ad60dd827d092f0893/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:31 np0005590810 podman[265440]: 2026-01-21 16:36:31.991095095 +0000 UTC m=+0.139077915 container init 85ef29d1c11cb275cf0e5ea0bb311377f4444e62392fd67800cdb1003a81929c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 21 11:36:32 np0005590810 podman[265440]: 2026-01-21 16:36:32.008118751 +0000 UTC m=+0.156101571 container start 85ef29d1c11cb275cf0e5ea0bb311377f4444e62392fd67800cdb1003a81929c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_boyd, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:36:32 np0005590810 podman[265440]: 2026-01-21 16:36:32.012977884 +0000 UTC m=+0.160960704 container attach 85ef29d1c11cb275cf0e5ea0bb311377f4444e62392fd67800cdb1003a81929c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_boyd, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:36:32 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:36:32 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:36:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:32 np0005590810 wizardly_boyd[265458]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:36:32 np0005590810 wizardly_boyd[265458]: --> All data devices are unavailable
Jan 21 11:36:32 np0005590810 nova_compute[251104]: 2026-01-21 16:36:32.352 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:32 np0005590810 systemd[1]: libpod-85ef29d1c11cb275cf0e5ea0bb311377f4444e62392fd67800cdb1003a81929c.scope: Deactivated successfully.
Jan 21 11:36:32 np0005590810 conmon[265458]: conmon 85ef29d1c11cb275cf0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85ef29d1c11cb275cf0e5ea0bb311377f4444e62392fd67800cdb1003a81929c.scope/container/memory.events
Jan 21 11:36:32 np0005590810 podman[265440]: 2026-01-21 16:36:32.385722521 +0000 UTC m=+0.533705341 container died 85ef29d1c11cb275cf0e5ea0bb311377f4444e62392fd67800cdb1003a81929c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_boyd, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:36:32 np0005590810 systemd[1]: var-lib-containers-storage-overlay-487761d2fb3c6cf21d2cc0aa48b2988cde7087a57fe9a7ad60dd827d092f0893-merged.mount: Deactivated successfully.
Jan 21 11:36:32 np0005590810 podman[265440]: 2026-01-21 16:36:32.429914434 +0000 UTC m=+0.577897264 container remove 85ef29d1c11cb275cf0e5ea0bb311377f4444e62392fd67800cdb1003a81929c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 11:36:32 np0005590810 systemd[1]: libpod-conmon-85ef29d1c11cb275cf0e5ea0bb311377f4444e62392fd67800cdb1003a81929c.scope: Deactivated successfully.
Jan 21 11:36:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 21 11:36:33 np0005590810 podman[265579]: 2026-01-21 16:36:33.009023505 +0000 UTC m=+0.044420001 container create 60f808869245ad68c1ba93222240ac1fc89adf0b97f3502aefb96173516ed160 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_benz, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:36:33 np0005590810 systemd[1]: Started libpod-conmon-60f808869245ad68c1ba93222240ac1fc89adf0b97f3502aefb96173516ed160.scope.
Jan 21 11:36:33 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:36:33 np0005590810 podman[265579]: 2026-01-21 16:36:32.988653522 +0000 UTC m=+0.024050038 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:36:33 np0005590810 podman[265579]: 2026-01-21 16:36:33.097805122 +0000 UTC m=+0.133201638 container init 60f808869245ad68c1ba93222240ac1fc89adf0b97f3502aefb96173516ed160 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_benz, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:36:33 np0005590810 podman[265579]: 2026-01-21 16:36:33.103377908 +0000 UTC m=+0.138774404 container start 60f808869245ad68c1ba93222240ac1fc89adf0b97f3502aefb96173516ed160 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_benz, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:36:33 np0005590810 podman[265579]: 2026-01-21 16:36:33.106858498 +0000 UTC m=+0.142255014 container attach 60f808869245ad68c1ba93222240ac1fc89adf0b97f3502aefb96173516ed160 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_benz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 11:36:33 np0005590810 interesting_benz[265595]: 167 167
Jan 21 11:36:33 np0005590810 systemd[1]: libpod-60f808869245ad68c1ba93222240ac1fc89adf0b97f3502aefb96173516ed160.scope: Deactivated successfully.
Jan 21 11:36:33 np0005590810 podman[265579]: 2026-01-21 16:36:33.109793271 +0000 UTC m=+0.145189767 container died 60f808869245ad68c1ba93222240ac1fc89adf0b97f3502aefb96173516ed160 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 21 11:36:33 np0005590810 systemd[1]: var-lib-containers-storage-overlay-f6d7d0019af0d8ed0046593d062152c2c9a25fe44bc8ee4fdc8e0f375f397fdf-merged.mount: Deactivated successfully.
Jan 21 11:36:33 np0005590810 podman[265579]: 2026-01-21 16:36:33.147893852 +0000 UTC m=+0.183290348 container remove 60f808869245ad68c1ba93222240ac1fc89adf0b97f3502aefb96173516ed160 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 21 11:36:33 np0005590810 systemd[1]: libpod-conmon-60f808869245ad68c1ba93222240ac1fc89adf0b97f3502aefb96173516ed160.scope: Deactivated successfully.
Jan 21 11:36:33 np0005590810 podman[265623]: 2026-01-21 16:36:33.316055501 +0000 UTC m=+0.048625923 container create f5e50dcb756623d992aff0cc579a68b3d764e321b0091e5361920a039f64ce52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_kepler, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 21 11:36:33 np0005590810 systemd[1]: Started libpod-conmon-f5e50dcb756623d992aff0cc579a68b3d764e321b0091e5361920a039f64ce52.scope.
Jan 21 11:36:33 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:36:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5c1abcf4d41ced280a6d9dcd3af9e6c65d26989cbf7972da60f00ba286b33e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:33 np0005590810 podman[265623]: 2026-01-21 16:36:33.294299505 +0000 UTC m=+0.026869967 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:36:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5c1abcf4d41ced280a6d9dcd3af9e6c65d26989cbf7972da60f00ba286b33e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5c1abcf4d41ced280a6d9dcd3af9e6c65d26989cbf7972da60f00ba286b33e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5c1abcf4d41ced280a6d9dcd3af9e6c65d26989cbf7972da60f00ba286b33e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:33 np0005590810 podman[265623]: 2026-01-21 16:36:33.398888152 +0000 UTC m=+0.131458594 container init f5e50dcb756623d992aff0cc579a68b3d764e321b0091e5361920a039f64ce52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:36:33 np0005590810 podman[265623]: 2026-01-21 16:36:33.409287989 +0000 UTC m=+0.141858401 container start f5e50dcb756623d992aff0cc579a68b3d764e321b0091e5361920a039f64ce52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Jan 21 11:36:33 np0005590810 podman[265623]: 2026-01-21 16:36:33.412662756 +0000 UTC m=+0.145233178 container attach f5e50dcb756623d992aff0cc579a68b3d764e321b0091e5361920a039f64ce52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 11:36:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:33.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:33 np0005590810 cool_kepler[265639]: {
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:    "0": [
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:        {
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            "devices": [
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "/dev/loop3"
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            ],
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            "lv_name": "ceph_lv0",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            "lv_size": "21470642176",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            "name": "ceph_lv0",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            "tags": {
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.cluster_name": "ceph",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.crush_device_class": "",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.encrypted": "0",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.osd_id": "0",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.type": "block",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.vdo": "0",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:                "ceph.with_tpm": "0"
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            },
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            "type": "block",
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:            "vg_name": "ceph_vg0"
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:        }
Jan 21 11:36:33 np0005590810 cool_kepler[265639]:    ]
Jan 21 11:36:33 np0005590810 cool_kepler[265639]: }
Jan 21 11:36:33 np0005590810 systemd[1]: libpod-f5e50dcb756623d992aff0cc579a68b3d764e321b0091e5361920a039f64ce52.scope: Deactivated successfully.
Jan 21 11:36:33 np0005590810 podman[265623]: 2026-01-21 16:36:33.733064694 +0000 UTC m=+0.465635106 container died f5e50dcb756623d992aff0cc579a68b3d764e321b0091e5361920a039f64ce52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_kepler, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 11:36:33 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ff5c1abcf4d41ced280a6d9dcd3af9e6c65d26989cbf7972da60f00ba286b33e-merged.mount: Deactivated successfully.
Jan 21 11:36:33 np0005590810 podman[265623]: 2026-01-21 16:36:33.783317108 +0000 UTC m=+0.515887530 container remove f5e50dcb756623d992aff0cc579a68b3d764e321b0091e5361920a039f64ce52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_kepler, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:36:33 np0005590810 systemd[1]: libpod-conmon-f5e50dcb756623d992aff0cc579a68b3d764e321b0091e5361920a039f64ce52.scope: Deactivated successfully.
Jan 21 11:36:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:34.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:34 np0005590810 podman[265752]: 2026-01-21 16:36:34.424387621 +0000 UTC m=+0.038085481 container create 41cac6e734db9094b8076afab5547720a57ca7638b8c585b91f035b50509d3a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_golick, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 11:36:34 np0005590810 systemd[1]: Started libpod-conmon-41cac6e734db9094b8076afab5547720a57ca7638b8c585b91f035b50509d3a8.scope.
Jan 21 11:36:34 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:36:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 59 op/s
Jan 21 11:36:34 np0005590810 podman[265752]: 2026-01-21 16:36:34.497799524 +0000 UTC m=+0.111497404 container init 41cac6e734db9094b8076afab5547720a57ca7638b8c585b91f035b50509d3a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 11:36:34 np0005590810 podman[265752]: 2026-01-21 16:36:34.408535181 +0000 UTC m=+0.022233061 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:36:34 np0005590810 podman[265752]: 2026-01-21 16:36:34.505651823 +0000 UTC m=+0.119349683 container start 41cac6e734db9094b8076afab5547720a57ca7638b8c585b91f035b50509d3a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 11:36:34 np0005590810 podman[265752]: 2026-01-21 16:36:34.509338898 +0000 UTC m=+0.123036778 container attach 41cac6e734db9094b8076afab5547720a57ca7638b8c585b91f035b50509d3a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_golick, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:36:34 np0005590810 gracious_golick[265768]: 167 167
Jan 21 11:36:34 np0005590810 systemd[1]: libpod-41cac6e734db9094b8076afab5547720a57ca7638b8c585b91f035b50509d3a8.scope: Deactivated successfully.
Jan 21 11:36:34 np0005590810 podman[265752]: 2026-01-21 16:36:34.512797097 +0000 UTC m=+0.126494947 container died 41cac6e734db9094b8076afab5547720a57ca7638b8c585b91f035b50509d3a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 11:36:34 np0005590810 systemd[1]: var-lib-containers-storage-overlay-2ec330112367f6992ef2fabc6b3eaf6d16ccad9e85a18fc1a3e554495ef1bbd5-merged.mount: Deactivated successfully.
Jan 21 11:36:34 np0005590810 podman[265752]: 2026-01-21 16:36:34.556147744 +0000 UTC m=+0.169845604 container remove 41cac6e734db9094b8076afab5547720a57ca7638b8c585b91f035b50509d3a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_golick, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:36:34 np0005590810 systemd[1]: libpod-conmon-41cac6e734db9094b8076afab5547720a57ca7638b8c585b91f035b50509d3a8.scope: Deactivated successfully.
Jan 21 11:36:34 np0005590810 podman[265796]: 2026-01-21 16:36:34.734247187 +0000 UTC m=+0.047876430 container create ea068a5d1d5dcd3edfd6b46fc763f4b78804df594e75ac57923a3498010f4383 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_keller, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 11:36:34 np0005590810 systemd[1]: Started libpod-conmon-ea068a5d1d5dcd3edfd6b46fc763f4b78804df594e75ac57923a3498010f4383.scope.
Jan 21 11:36:34 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:36:34 np0005590810 podman[265796]: 2026-01-21 16:36:34.711989765 +0000 UTC m=+0.025619038 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:36:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc6df1b6425856cb36f44c752e80f15d91ff6821e5b4a4ea960c78759a5257e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc6df1b6425856cb36f44c752e80f15d91ff6821e5b4a4ea960c78759a5257e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc6df1b6425856cb36f44c752e80f15d91ff6821e5b4a4ea960c78759a5257e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:34 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc6df1b6425856cb36f44c752e80f15d91ff6821e5b4a4ea960c78759a5257e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:36:34 np0005590810 podman[265796]: 2026-01-21 16:36:34.827511856 +0000 UTC m=+0.141141119 container init ea068a5d1d5dcd3edfd6b46fc763f4b78804df594e75ac57923a3498010f4383 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:36:34 np0005590810 podman[265796]: 2026-01-21 16:36:34.834910049 +0000 UTC m=+0.148539292 container start ea068a5d1d5dcd3edfd6b46fc763f4b78804df594e75ac57923a3498010f4383 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_keller, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:36:34 np0005590810 podman[265796]: 2026-01-21 16:36:34.838898325 +0000 UTC m=+0.152527608 container attach ea068a5d1d5dcd3edfd6b46fc763f4b78804df594e75ac57923a3498010f4383 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:36:35 np0005590810 nova_compute[251104]: 2026-01-21 16:36:35.225 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:35 np0005590810 nova_compute[251104]: 2026-01-21 16:36:35.382 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:36:35 np0005590810 lvm[265888]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:36:35 np0005590810 lvm[265888]: VG ceph_vg0 finished
Jan 21 11:36:35 np0005590810 hardcore_keller[265812]: {}
Jan 21 11:36:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:35.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:36:35] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Jan 21 11:36:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:36:35] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Jan 21 11:36:35 np0005590810 systemd[1]: libpod-ea068a5d1d5dcd3edfd6b46fc763f4b78804df594e75ac57923a3498010f4383.scope: Deactivated successfully.
Jan 21 11:36:35 np0005590810 podman[265796]: 2026-01-21 16:36:35.610030707 +0000 UTC m=+0.923659960 container died ea068a5d1d5dcd3edfd6b46fc763f4b78804df594e75ac57923a3498010f4383 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_keller, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 21 11:36:35 np0005590810 systemd[1]: libpod-ea068a5d1d5dcd3edfd6b46fc763f4b78804df594e75ac57923a3498010f4383.scope: Consumed 1.281s CPU time.
Jan 21 11:36:35 np0005590810 systemd[1]: var-lib-containers-storage-overlay-fdc6df1b6425856cb36f44c752e80f15d91ff6821e5b4a4ea960c78759a5257e-merged.mount: Deactivated successfully.
Jan 21 11:36:35 np0005590810 podman[265796]: 2026-01-21 16:36:35.662984616 +0000 UTC m=+0.976613859 container remove ea068a5d1d5dcd3edfd6b46fc763f4b78804df594e75ac57923a3498010f4383 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_keller, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:36:35 np0005590810 systemd[1]: libpod-conmon-ea068a5d1d5dcd3edfd6b46fc763f4b78804df594e75ac57923a3498010f4383.scope: Deactivated successfully.
Jan 21 11:36:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:36:35 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:36:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:36:35 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:36:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:36:36 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:36:36 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:36:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:36.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 21 11:36:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:36:37.168Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:36:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:36:37.168Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:36:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:36:37.169Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:36:37 np0005590810 nova_compute[251104]: 2026-01-21 16:36:37.357 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:37 np0005590810 nova_compute[251104]: 2026-01-21 16:36:37.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:36:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:37.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:38.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:38 np0005590810 nova_compute[251104]: 2026-01-21 16:36:38.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:36:38 np0005590810 nova_compute[251104]: 2026-01-21 16:36:38.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 11:36:38 np0005590810 nova_compute[251104]: 2026-01-21 16:36:38.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 11:36:38 np0005590810 nova_compute[251104]: 2026-01-21 16:36:38.383 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 11:36:38 np0005590810 nova_compute[251104]: 2026-01-21 16:36:38.383 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:36:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:36:39
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root', '.mgr', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'images', 'vms']
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:36:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:36:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:36:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:39.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:36:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:36:40 np0005590810 nova_compute[251104]: 2026-01-21 16:36:40.227 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:40.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:40 np0005590810 nova_compute[251104]: 2026-01-21 16:36:40.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:36:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 21 11:36:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:36:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:41.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:42 np0005590810 nova_compute[251104]: 2026-01-21 16:36:42.361 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:42 np0005590810 nova_compute[251104]: 2026-01-21 16:36:42.367 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:36:42 np0005590810 nova_compute[251104]: 2026-01-21 16:36:42.402 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:36:42 np0005590810 nova_compute[251104]: 2026-01-21 16:36:42.403 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:36:42 np0005590810 nova_compute[251104]: 2026-01-21 16:36:42.403 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:36:42 np0005590810 nova_compute[251104]: 2026-01-21 16:36:42.403 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:36:42 np0005590810 nova_compute[251104]: 2026-01-21 16:36:42.403 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:36:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 21 11:36:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:36:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/428051816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:36:42 np0005590810 nova_compute[251104]: 2026-01-21 16:36:42.871 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:36:43 np0005590810 nova_compute[251104]: 2026-01-21 16:36:43.087 251108 WARNING nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:36:43 np0005590810 nova_compute[251104]: 2026-01-21 16:36:43.089 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4647MB free_disk=59.94289016723633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 11:36:43 np0005590810 nova_compute[251104]: 2026-01-21 16:36:43.089 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:36:43 np0005590810 nova_compute[251104]: 2026-01-21 16:36:43.090 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:36:43 np0005590810 nova_compute[251104]: 2026-01-21 16:36:43.158 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 11:36:43 np0005590810 nova_compute[251104]: 2026-01-21 16:36:43.159 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 11:36:43 np0005590810 nova_compute[251104]: 2026-01-21 16:36:43.180 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:36:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:43.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:36:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3650881207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:36:43 np0005590810 nova_compute[251104]: 2026-01-21 16:36:43.654 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:36:43 np0005590810 nova_compute[251104]: 2026-01-21 16:36:43.662 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:36:43 np0005590810 nova_compute[251104]: 2026-01-21 16:36:43.686 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:36:43 np0005590810 nova_compute[251104]: 2026-01-21 16:36:43.712 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 11:36:43 np0005590810 nova_compute[251104]: 2026-01-21 16:36:43.712 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:36:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:44.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 21 11:36:44 np0005590810 nova_compute[251104]: 2026-01-21 16:36:44.713 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:36:44 np0005590810 nova_compute[251104]: 2026-01-21 16:36:44.714 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:36:44 np0005590810 nova_compute[251104]: 2026-01-21 16:36:44.714 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:36:44 np0005590810 nova_compute[251104]: 2026-01-21 16:36:44.714 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 11:36:45 np0005590810 nova_compute[251104]: 2026-01-21 16:36:45.230 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:45.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:36:45] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Jan 21 11:36:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:36:45] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Jan 21 11:36:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:36:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 21 11:36:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:36:47.170Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:36:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:36:47.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:36:47 np0005590810 nova_compute[251104]: 2026-01-21 16:36:47.364 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:47.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:48.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 14 KiB/s wr, 1 op/s
Jan 21 11:36:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:49.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:50 np0005590810 nova_compute[251104]: 2026-01-21 16:36:50.232 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:50.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 14 KiB/s wr, 1 op/s
Jan 21 11:36:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:36:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:51.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:52 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:36:52.034 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:19:7b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:3b:98:31:96:2a'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:36:52 np0005590810 nova_compute[251104]: 2026-01-21 16:36:52.034 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:52 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:36:52.035 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 11:36:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:52.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:52 np0005590810 nova_compute[251104]: 2026-01-21 16:36:52.366 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:52 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v887: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 14 KiB/s wr, 2 op/s
Jan 21 11:36:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:53.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:54 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:36:54.037 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f6e8413f-2ba2-49cb-8bd6-36b8085ce01c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:36:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:36:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:36:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:54.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:54 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Jan 21 11:36:55 np0005590810 nova_compute[251104]: 2026-01-21 16:36:55.233 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:36:55] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Jan 21 11:36:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:36:55] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Jan 21 11:36:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:55.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:36:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:36:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:56.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:36:56 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v889: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 7.3 KiB/s wr, 2 op/s
Jan 21 11:36:56 np0005590810 podman[266023]: 2026-01-21 16:36:56.732640075 +0000 UTC m=+0.097594367 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 11:36:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:36:57.171Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:36:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:36:57.172Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:36:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:36:57.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:36:57 np0005590810 nova_compute[251104]: 2026-01-21 16:36:57.370 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:36:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:36:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:57.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:36:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:36:58.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:36:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Jan 21 11:36:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:36:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:36:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:36:59.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:00 np0005590810 nova_compute[251104]: 2026-01-21 16:37:00.237 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:00.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Jan 21 11:37:00 np0005590810 podman[266048]: 2026-01-21 16:37:00.748523819 +0000 UTC m=+0.123435061 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 11:37:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:37:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:01.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:02.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:02 np0005590810 nova_compute[251104]: 2026-01-21 16:37:02.374 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v892: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 1.8 MiB/s wr, 147 op/s
Jan 21 11:37:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:03.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:04.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 146 op/s
Jan 21 11:37:05 np0005590810 nova_compute[251104]: 2026-01-21 16:37:05.240 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:37:05] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Jan 21 11:37:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:37:05] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Jan 21 11:37:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:05.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:37:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:06.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 242 op/s
Jan 21 11:37:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:37:07.173Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:37:07 np0005590810 nova_compute[251104]: 2026-01-21 16:37:07.377 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:07.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:08.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Jan 21 11:37:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:37:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:37:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:37:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:37:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:09.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:37:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:37:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:37:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:37:10 np0005590810 nova_compute[251104]: 2026-01-21 16:37:10.242 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:10.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Jan 21 11:37:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:37:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:11.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:12.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:12 np0005590810 nova_compute[251104]: 2026-01-21 16:37:12.382 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 242 op/s
Jan 21 11:37:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:13.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:14.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 96 op/s
Jan 21 11:37:15 np0005590810 nova_compute[251104]: 2026-01-21 16:37:15.245 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:37:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:15.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:37:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:37:15] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 21 11:37:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:37:15] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 21 11:37:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:37:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:16.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 251 KiB/s wr, 100 op/s
Jan 21 11:37:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:37:17.174Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:37:17 np0005590810 nova_compute[251104]: 2026-01-21 16:37:17.384 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:17.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:18.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 236 KiB/s wr, 5 op/s
Jan 21 11:37:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:19.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:20 np0005590810 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 21 11:37:20 np0005590810 nova_compute[251104]: 2026-01-21 16:37:20.247 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:20.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 236 KiB/s wr, 5 op/s
Jan 21 11:37:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:37:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:21.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:37:22.026 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:37:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:37:22.027 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:37:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:37:22.027 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:37:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:22.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:22 np0005590810 nova_compute[251104]: 2026-01-21 16:37:22.389 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 21 11:37:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:23.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:37:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:37:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:24.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 21 11:37:25 np0005590810 nova_compute[251104]: 2026-01-21 16:37:25.250 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:37:25] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 21 11:37:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:37:25] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 21 11:37:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:25.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:37:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:26.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 21 11:37:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:37:27.175Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:37:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:37:27.175Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:37:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:37:27.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:37:27 np0005590810 nova_compute[251104]: 2026-01-21 16:37:27.391 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:27.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:27 np0005590810 podman[266154]: 2026-01-21 16:37:27.69063697 +0000 UTC m=+0.064374230 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:37:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:28.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 1.9 MiB/s wr, 61 op/s
Jan 21 11:37:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:29.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:30 np0005590810 nova_compute[251104]: 2026-01-21 16:37:30.254 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:30.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 1.9 MiB/s wr, 61 op/s
Jan 21 11:37:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:37:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:31.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:31 np0005590810 podman[266176]: 2026-01-21 16:37:31.725844704 +0000 UTC m=+0.101648265 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:37:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:32.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:32 np0005590810 nova_compute[251104]: 2026-01-21 16:37:32.394 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 1.9 MiB/s wr, 62 op/s
Jan 21 11:37:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:37:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:33.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:37:34 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:37:34.056 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:19:7b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:3b:98:31:96:2a'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:37:34 np0005590810 nova_compute[251104]: 2026-01-21 16:37:34.056 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:34 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:37:34.057 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 11:37:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:34.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 17 KiB/s wr, 1 op/s
Jan 21 11:37:35 np0005590810 nova_compute[251104]: 2026-01-21 16:37:35.256 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:37:35] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 21 11:37:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:37:35] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 21 11:37:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:35.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:37:36 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:37:36.059 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f6e8413f-2ba2-49cb-8bd6-36b8085ce01c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:37:36 np0005590810 nova_compute[251104]: 2026-01-21 16:37:36.364 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:37:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:36.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 28 KiB/s wr, 31 op/s
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:37:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:37:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:37:37.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:37:37 np0005590810 nova_compute[251104]: 2026-01-21 16:37:37.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:37:37 np0005590810 nova_compute[251104]: 2026-01-21 16:37:37.397 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:37 np0005590810 podman[266382]: 2026-01-21 16:37:37.512953368 +0000 UTC m=+0.054863721 container create e5c3beb4cc53d9df32dd996f08b6542e260735e161b4ef5ab8cbf883811d8758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatterjee, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 11:37:37 np0005590810 systemd[1]: Started libpod-conmon-e5c3beb4cc53d9df32dd996f08b6542e260735e161b4ef5ab8cbf883811d8758.scope.
Jan 21 11:37:37 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:37:37 np0005590810 podman[266382]: 2026-01-21 16:37:37.484892894 +0000 UTC m=+0.026803277 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:37:37 np0005590810 podman[266382]: 2026-01-21 16:37:37.594084035 +0000 UTC m=+0.135994418 container init e5c3beb4cc53d9df32dd996f08b6542e260735e161b4ef5ab8cbf883811d8758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 21 11:37:37 np0005590810 podman[266382]: 2026-01-21 16:37:37.602783868 +0000 UTC m=+0.144694221 container start e5c3beb4cc53d9df32dd996f08b6542e260735e161b4ef5ab8cbf883811d8758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:37:37 np0005590810 podman[266382]: 2026-01-21 16:37:37.607888 +0000 UTC m=+0.149798353 container attach e5c3beb4cc53d9df32dd996f08b6542e260735e161b4ef5ab8cbf883811d8758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:37:37 np0005590810 gracious_chatterjee[266398]: 167 167
Jan 21 11:37:37 np0005590810 systemd[1]: libpod-e5c3beb4cc53d9df32dd996f08b6542e260735e161b4ef5ab8cbf883811d8758.scope: Deactivated successfully.
Jan 21 11:37:37 np0005590810 podman[266382]: 2026-01-21 16:37:37.609858052 +0000 UTC m=+0.151768425 container died e5c3beb4cc53d9df32dd996f08b6542e260735e161b4ef5ab8cbf883811d8758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Jan 21 11:37:37 np0005590810 systemd[1]: var-lib-containers-storage-overlay-90f9e92f02466580da370bb7adff3e90c37575cd36d68ab975739276f053159b-merged.mount: Deactivated successfully.
Jan 21 11:37:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:37.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:37 np0005590810 podman[266382]: 2026-01-21 16:37:37.668582262 +0000 UTC m=+0.210492645 container remove e5c3beb4cc53d9df32dd996f08b6542e260735e161b4ef5ab8cbf883811d8758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 21 11:37:37 np0005590810 systemd[1]: libpod-conmon-e5c3beb4cc53d9df32dd996f08b6542e260735e161b4ef5ab8cbf883811d8758.scope: Deactivated successfully.
Jan 21 11:37:37 np0005590810 podman[266421]: 2026-01-21 16:37:37.871771206 +0000 UTC m=+0.065812855 container create fab53ebca274a2873973b6bc81a86850d35a9ad3df9580f13a0d291ea24ff1d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:37:37 np0005590810 podman[266421]: 2026-01-21 16:37:37.841041087 +0000 UTC m=+0.035082766 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:37:37 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:37:37 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:37:37 np0005590810 systemd[1]: Started libpod-conmon-fab53ebca274a2873973b6bc81a86850d35a9ad3df9580f13a0d291ea24ff1d0.scope.
Jan 21 11:37:37 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:37:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd859a660481cbf1f0e3ec0b3fed2a2255407dbf629ae024b66d61ee8fb179d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd859a660481cbf1f0e3ec0b3fed2a2255407dbf629ae024b66d61ee8fb179d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd859a660481cbf1f0e3ec0b3fed2a2255407dbf629ae024b66d61ee8fb179d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:37 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd859a660481cbf1f0e3ec0b3fed2a2255407dbf629ae024b66d61ee8fb179d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:38 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd859a660481cbf1f0e3ec0b3fed2a2255407dbf629ae024b66d61ee8fb179d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:38 np0005590810 podman[266421]: 2026-01-21 16:37:38.009524907 +0000 UTC m=+0.203566576 container init fab53ebca274a2873973b6bc81a86850d35a9ad3df9580f13a0d291ea24ff1d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 21 11:37:38 np0005590810 podman[266421]: 2026-01-21 16:37:38.016542279 +0000 UTC m=+0.210583928 container start fab53ebca274a2873973b6bc81a86850d35a9ad3df9580f13a0d291ea24ff1d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 21 11:37:38 np0005590810 podman[266421]: 2026-01-21 16:37:38.026838633 +0000 UTC m=+0.220880442 container attach fab53ebca274a2873973b6bc81a86850d35a9ad3df9580f13a0d291ea24ff1d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wozniak, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:37:38 np0005590810 quizzical_wozniak[266437]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:37:38 np0005590810 quizzical_wozniak[266437]: --> All data devices are unavailable
Jan 21 11:37:38 np0005590810 systemd[1]: libpod-fab53ebca274a2873973b6bc81a86850d35a9ad3df9580f13a0d291ea24ff1d0.scope: Deactivated successfully.
Jan 21 11:37:38 np0005590810 podman[266421]: 2026-01-21 16:37:38.391080183 +0000 UTC m=+0.585121842 container died fab53ebca274a2873973b6bc81a86850d35a9ad3df9580f13a0d291ea24ff1d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wozniak, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:37:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:38.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 30 op/s
Jan 21 11:37:38 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6fd859a660481cbf1f0e3ec0b3fed2a2255407dbf629ae024b66d61ee8fb179d-merged.mount: Deactivated successfully.
Jan 21 11:37:38 np0005590810 podman[266421]: 2026-01-21 16:37:38.810439419 +0000 UTC m=+1.004481068 container remove fab53ebca274a2873973b6bc81a86850d35a9ad3df9580f13a0d291ea24ff1d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wozniak, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:37:38 np0005590810 systemd[1]: libpod-conmon-fab53ebca274a2873973b6bc81a86850d35a9ad3df9580f13a0d291ea24ff1d0.scope: Deactivated successfully.
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:37:39
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.control', '.rgw.root', '.nfs', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.meta']
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:37:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:37:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:37:39 np0005590810 podman[266558]: 2026-01-21 16:37:39.446172174 +0000 UTC m=+0.062578953 container create 3659e609a96fc4d3f2a06c22bd778a63097cde0a8dc2ad6dea34ee9c2b99bf38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_tharp, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:37:39 np0005590810 systemd[1]: Started libpod-conmon-3659e609a96fc4d3f2a06c22bd778a63097cde0a8dc2ad6dea34ee9c2b99bf38.scope.
Jan 21 11:37:39 np0005590810 podman[266558]: 2026-01-21 16:37:39.41301863 +0000 UTC m=+0.029425439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:37:39 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:37:39 np0005590810 podman[266558]: 2026-01-21 16:37:39.555080876 +0000 UTC m=+0.171487675 container init 3659e609a96fc4d3f2a06c22bd778a63097cde0a8dc2ad6dea34ee9c2b99bf38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_tharp, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:37:39 np0005590810 podman[266558]: 2026-01-21 16:37:39.56280321 +0000 UTC m=+0.179209979 container start 3659e609a96fc4d3f2a06c22bd778a63097cde0a8dc2ad6dea34ee9c2b99bf38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:37:39 np0005590810 podman[266558]: 2026-01-21 16:37:39.569306865 +0000 UTC m=+0.185713664 container attach 3659e609a96fc4d3f2a06c22bd778a63097cde0a8dc2ad6dea34ee9c2b99bf38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_tharp, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 11:37:39 np0005590810 sweet_tharp[266574]: 167 167
Jan 21 11:37:39 np0005590810 systemd[1]: libpod-3659e609a96fc4d3f2a06c22bd778a63097cde0a8dc2ad6dea34ee9c2b99bf38.scope: Deactivated successfully.
Jan 21 11:37:39 np0005590810 podman[266558]: 2026-01-21 16:37:39.572891388 +0000 UTC m=+0.189298207 container died 3659e609a96fc4d3f2a06c22bd778a63097cde0a8dc2ad6dea34ee9c2b99bf38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 11:37:39 np0005590810 systemd[1]: var-lib-containers-storage-overlay-77dfd586ab72fc7f9f1386a3d153bf4ae9e8f97f630c4976ba6e966e1e2272a7-merged.mount: Deactivated successfully.
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007618166796900436 of space, bias 1.0, pg target 0.2285450039070131 quantized to 32 (current 32)
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:37:39 np0005590810 podman[266558]: 2026-01-21 16:37:39.642780681 +0000 UTC m=+0.259187490 container remove 3659e609a96fc4d3f2a06c22bd778a63097cde0a8dc2ad6dea34ee9c2b99bf38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:37:39 np0005590810 systemd[1]: libpod-conmon-3659e609a96fc4d3f2a06c22bd778a63097cde0a8dc2ad6dea34ee9c2b99bf38.scope: Deactivated successfully.
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:37:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:37:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:39.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:39 np0005590810 podman[266596]: 2026-01-21 16:37:39.839266903 +0000 UTC m=+0.055447819 container create 118d4132f8d75b051b1332b2a7cd924d602d61a7c208ce4d10f3983180d3fb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_shirley, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Jan 21 11:37:39 np0005590810 systemd[1]: Started libpod-conmon-118d4132f8d75b051b1332b2a7cd924d602d61a7c208ce4d10f3983180d3fb4a.scope.
Jan 21 11:37:39 np0005590810 podman[266596]: 2026-01-21 16:37:39.813512701 +0000 UTC m=+0.029693637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:37:39 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:37:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58e7a40675f15fc058bc392b2d6fdc5856846ad7269f09ac9d85599df66efb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58e7a40675f15fc058bc392b2d6fdc5856846ad7269f09ac9d85599df66efb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58e7a40675f15fc058bc392b2d6fdc5856846ad7269f09ac9d85599df66efb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58e7a40675f15fc058bc392b2d6fdc5856846ad7269f09ac9d85599df66efb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:39 np0005590810 podman[266596]: 2026-01-21 16:37:39.942026571 +0000 UTC m=+0.158207507 container init 118d4132f8d75b051b1332b2a7cd924d602d61a7c208ce4d10f3983180d3fb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:37:39 np0005590810 podman[266596]: 2026-01-21 16:37:39.949216748 +0000 UTC m=+0.165397674 container start 118d4132f8d75b051b1332b2a7cd924d602d61a7c208ce4d10f3983180d3fb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_shirley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:37:39 np0005590810 podman[266596]: 2026-01-21 16:37:39.958055846 +0000 UTC m=+0.174236792 container attach 118d4132f8d75b051b1332b2a7cd924d602d61a7c208ce4d10f3983180d3fb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_shirley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]: {
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:    "0": [
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:        {
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            "devices": [
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "/dev/loop3"
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            ],
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            "lv_name": "ceph_lv0",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            "lv_size": "21470642176",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            "name": "ceph_lv0",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            "tags": {
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.cluster_name": "ceph",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.crush_device_class": "",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.encrypted": "0",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.osd_id": "0",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.type": "block",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.vdo": "0",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:                "ceph.with_tpm": "0"
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            },
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            "type": "block",
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:            "vg_name": "ceph_vg0"
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:        }
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]:    ]
Jan 21 11:37:40 np0005590810 relaxed_shirley[266612]: }
Jan 21 11:37:40 np0005590810 nova_compute[251104]: 2026-01-21 16:37:40.258 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:40 np0005590810 systemd[1]: libpod-118d4132f8d75b051b1332b2a7cd924d602d61a7c208ce4d10f3983180d3fb4a.scope: Deactivated successfully.
Jan 21 11:37:40 np0005590810 conmon[266612]: conmon 118d4132f8d75b051b13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-118d4132f8d75b051b1332b2a7cd924d602d61a7c208ce4d10f3983180d3fb4a.scope/container/memory.events
Jan 21 11:37:40 np0005590810 podman[266596]: 2026-01-21 16:37:40.285877518 +0000 UTC m=+0.502058444 container died 118d4132f8d75b051b1332b2a7cd924d602d61a7c208ce4d10f3983180d3fb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_shirley, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 21 11:37:40 np0005590810 systemd[1]: var-lib-containers-storage-overlay-a58e7a40675f15fc058bc392b2d6fdc5856846ad7269f09ac9d85599df66efb4-merged.mount: Deactivated successfully.
Jan 21 11:37:40 np0005590810 podman[266596]: 2026-01-21 16:37:40.361342876 +0000 UTC m=+0.577523792 container remove 118d4132f8d75b051b1332b2a7cd924d602d61a7c208ce4d10f3983180d3fb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 11:37:40 np0005590810 nova_compute[251104]: 2026-01-21 16:37:40.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:37:40 np0005590810 nova_compute[251104]: 2026-01-21 16:37:40.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 11:37:40 np0005590810 nova_compute[251104]: 2026-01-21 16:37:40.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 11:37:40 np0005590810 systemd[1]: libpod-conmon-118d4132f8d75b051b1332b2a7cd924d602d61a7c208ce4d10f3983180d3fb4a.scope: Deactivated successfully.
Jan 21 11:37:40 np0005590810 nova_compute[251104]: 2026-01-21 16:37:40.387 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 11:37:40 np0005590810 nova_compute[251104]: 2026-01-21 16:37:40.387 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:37:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:40.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 30 op/s
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.813543) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013460813668, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1202, "num_deletes": 255, "total_data_size": 2058152, "memory_usage": 2093312, "flush_reason": "Manual Compaction"}
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013460834632, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2035445, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26897, "largest_seqno": 28098, "table_properties": {"data_size": 2029818, "index_size": 2958, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11902, "raw_average_key_size": 19, "raw_value_size": 2018475, "raw_average_value_size": 3245, "num_data_blocks": 132, "num_entries": 622, "num_filter_entries": 622, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769013348, "oldest_key_time": 1769013348, "file_creation_time": 1769013460, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 21109 microseconds, and 5951 cpu microseconds.
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.834694) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2035445 bytes OK
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.834723) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.839262) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.839318) EVENT_LOG_v1 {"time_micros": 1769013460839306, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.839347) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2052795, prev total WAL file size 2052795, number of live WAL files 2.
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.840337) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(1987KB)], [59(12MB)]
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013460840425, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 15312112, "oldest_snapshot_seqno": -1}
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5896 keys, 15191775 bytes, temperature: kUnknown
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013460955391, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 15191775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15148989, "index_size": 26932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 150532, "raw_average_key_size": 25, "raw_value_size": 15039296, "raw_average_value_size": 2550, "num_data_blocks": 1104, "num_entries": 5896, "num_filter_entries": 5896, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769013460, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.955717) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 15191775 bytes
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.957620) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.1 rd, 132.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.7 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(15.0) write-amplify(7.5) OK, records in: 6420, records dropped: 524 output_compression: NoCompression
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.957641) EVENT_LOG_v1 {"time_micros": 1769013460957631, "job": 32, "event": "compaction_finished", "compaction_time_micros": 115058, "compaction_time_cpu_micros": 32062, "output_level": 6, "num_output_files": 1, "total_output_size": 15191775, "num_input_records": 6420, "num_output_records": 5896, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013460958161, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013460960859, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.840172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.960997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.961006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.961008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.961009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:37:40 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:37:40.961011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:37:41 np0005590810 podman[266724]: 2026-01-21 16:37:41.044638091 +0000 UTC m=+0.050163192 container create 2e129686e6038697f7226d7e99df5f0d20b921ba38429b1d81771abfce6e94c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_euler, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:37:41 np0005590810 systemd[1]: Started libpod-conmon-2e129686e6038697f7226d7e99df5f0d20b921ba38429b1d81771abfce6e94c6.scope.
Jan 21 11:37:41 np0005590810 podman[266724]: 2026-01-21 16:37:41.02178079 +0000 UTC m=+0.027305911 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:37:41 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:37:41 np0005590810 podman[266724]: 2026-01-21 16:37:41.13819113 +0000 UTC m=+0.143716251 container init 2e129686e6038697f7226d7e99df5f0d20b921ba38429b1d81771abfce6e94c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 21 11:37:41 np0005590810 podman[266724]: 2026-01-21 16:37:41.145768668 +0000 UTC m=+0.151293769 container start 2e129686e6038697f7226d7e99df5f0d20b921ba38429b1d81771abfce6e94c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:37:41 np0005590810 podman[266724]: 2026-01-21 16:37:41.151016223 +0000 UTC m=+0.156541324 container attach 2e129686e6038697f7226d7e99df5f0d20b921ba38429b1d81771abfce6e94c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_euler, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Jan 21 11:37:41 np0005590810 hopeful_euler[266740]: 167 167
Jan 21 11:37:41 np0005590810 systemd[1]: libpod-2e129686e6038697f7226d7e99df5f0d20b921ba38429b1d81771abfce6e94c6.scope: Deactivated successfully.
Jan 21 11:37:41 np0005590810 podman[266724]: 2026-01-21 16:37:41.155093612 +0000 UTC m=+0.160618713 container died 2e129686e6038697f7226d7e99df5f0d20b921ba38429b1d81771abfce6e94c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 11:37:41 np0005590810 systemd[1]: var-lib-containers-storage-overlay-ea1e00e2e87b74abb2e5c9f2586f6e8af618a831cd0dd33aa92af295aeda02f2-merged.mount: Deactivated successfully.
Jan 21 11:37:41 np0005590810 podman[266724]: 2026-01-21 16:37:41.211218841 +0000 UTC m=+0.216743942 container remove 2e129686e6038697f7226d7e99df5f0d20b921ba38429b1d81771abfce6e94c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:37:41 np0005590810 systemd[1]: libpod-conmon-2e129686e6038697f7226d7e99df5f0d20b921ba38429b1d81771abfce6e94c6.scope: Deactivated successfully.
Jan 21 11:37:41 np0005590810 nova_compute[251104]: 2026-01-21 16:37:41.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:37:41 np0005590810 podman[266767]: 2026-01-21 16:37:41.395148318 +0000 UTC m=+0.048253582 container create 4f36a6f45ca2da6b8e847a84d5744deb984c158f18791bd28d4c50a6f1702a27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wescoff, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:37:41 np0005590810 systemd[1]: Started libpod-conmon-4f36a6f45ca2da6b8e847a84d5744deb984c158f18791bd28d4c50a6f1702a27.scope.
Jan 21 11:37:41 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:37:41 np0005590810 podman[266767]: 2026-01-21 16:37:41.373219187 +0000 UTC m=+0.026324471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:37:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e79eb66fb6be28f18312f9304de754225153514fe60cc85ab13cfef576cb7e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e79eb66fb6be28f18312f9304de754225153514fe60cc85ab13cfef576cb7e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e79eb66fb6be28f18312f9304de754225153514fe60cc85ab13cfef576cb7e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:41 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e79eb66fb6be28f18312f9304de754225153514fe60cc85ab13cfef576cb7e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:37:41 np0005590810 podman[266767]: 2026-01-21 16:37:41.484895506 +0000 UTC m=+0.138000790 container init 4f36a6f45ca2da6b8e847a84d5744deb984c158f18791bd28d4c50a6f1702a27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wescoff, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:37:41 np0005590810 podman[266767]: 2026-01-21 16:37:41.492213166 +0000 UTC m=+0.145318430 container start 4f36a6f45ca2da6b8e847a84d5744deb984c158f18791bd28d4c50a6f1702a27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:37:41 np0005590810 podman[266767]: 2026-01-21 16:37:41.498482934 +0000 UTC m=+0.151588318 container attach 4f36a6f45ca2da6b8e847a84d5744deb984c158f18791bd28d4c50a6f1702a27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wescoff, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 21 11:37:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:41.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:42 np0005590810 lvm[266858]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:37:42 np0005590810 lvm[266858]: VG ceph_vg0 finished
Jan 21 11:37:42 np0005590810 sleepy_wescoff[266784]: {}
Jan 21 11:37:42 np0005590810 podman[266767]: 2026-01-21 16:37:42.312279971 +0000 UTC m=+0.965385245 container died 4f36a6f45ca2da6b8e847a84d5744deb984c158f18791bd28d4c50a6f1702a27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:37:42 np0005590810 systemd[1]: libpod-4f36a6f45ca2da6b8e847a84d5744deb984c158f18791bd28d4c50a6f1702a27.scope: Deactivated successfully.
Jan 21 11:37:42 np0005590810 systemd[1]: libpod-4f36a6f45ca2da6b8e847a84d5744deb984c158f18791bd28d4c50a6f1702a27.scope: Consumed 1.316s CPU time.
Jan 21 11:37:42 np0005590810 systemd[1]: var-lib-containers-storage-overlay-5e79eb66fb6be28f18312f9304de754225153514fe60cc85ab13cfef576cb7e9-merged.mount: Deactivated successfully.
Jan 21 11:37:42 np0005590810 podman[266767]: 2026-01-21 16:37:42.368664378 +0000 UTC m=+1.021769652 container remove 4f36a6f45ca2da6b8e847a84d5744deb984c158f18791bd28d4c50a6f1702a27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:37:42 np0005590810 nova_compute[251104]: 2026-01-21 16:37:42.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:37:42 np0005590810 systemd[1]: libpod-conmon-4f36a6f45ca2da6b8e847a84d5744deb984c158f18791bd28d4c50a6f1702a27.scope: Deactivated successfully.
Jan 21 11:37:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:42.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:42 np0005590810 nova_compute[251104]: 2026-01-21 16:37:42.400 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:42 np0005590810 nova_compute[251104]: 2026-01-21 16:37:42.409 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:37:42 np0005590810 nova_compute[251104]: 2026-01-21 16:37:42.409 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:37:42 np0005590810 nova_compute[251104]: 2026-01-21 16:37:42.410 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:37:42 np0005590810 nova_compute[251104]: 2026-01-21 16:37:42.410 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:37:42 np0005590810 nova_compute[251104]: 2026-01-21 16:37:42.410 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:37:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:37:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:37:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:37:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:37:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 17 KiB/s wr, 45 op/s
Jan 21 11:37:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:37:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1041295212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:37:42 np0005590810 nova_compute[251104]: 2026-01-21 16:37:42.922 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:37:43 np0005590810 nova_compute[251104]: 2026-01-21 16:37:43.138 251108 WARNING nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:37:43 np0005590810 nova_compute[251104]: 2026-01-21 16:37:43.140 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4561MB free_disk=59.94258117675781GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 11:37:43 np0005590810 nova_compute[251104]: 2026-01-21 16:37:43.140 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:37:43 np0005590810 nova_compute[251104]: 2026-01-21 16:37:43.140 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:37:43 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:37:43 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:37:43 np0005590810 nova_compute[251104]: 2026-01-21 16:37:43.201 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 11:37:43 np0005590810 nova_compute[251104]: 2026-01-21 16:37:43.202 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 11:37:43 np0005590810 nova_compute[251104]: 2026-01-21 16:37:43.223 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:37:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:37:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:43.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:37:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:37:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199447623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:37:43 np0005590810 nova_compute[251104]: 2026-01-21 16:37:43.736 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:37:43 np0005590810 nova_compute[251104]: 2026-01-21 16:37:43.742 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:37:43 np0005590810 nova_compute[251104]: 2026-01-21 16:37:43.755 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:37:43 np0005590810 nova_compute[251104]: 2026-01-21 16:37:43.756 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 11:37:43 np0005590810 nova_compute[251104]: 2026-01-21 16:37:43.757 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:37:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:44.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 45 op/s
Jan 21 11:37:44 np0005590810 nova_compute[251104]: 2026-01-21 16:37:44.757 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:37:45 np0005590810 nova_compute[251104]: 2026-01-21 16:37:45.260 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:45 np0005590810 nova_compute[251104]: 2026-01-21 16:37:45.363 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:37:45 np0005590810 nova_compute[251104]: 2026-01-21 16:37:45.379 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:37:45 np0005590810 nova_compute[251104]: 2026-01-21 16:37:45.380 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:37:45 np0005590810 nova_compute[251104]: 2026-01-21 16:37:45.380 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 11:37:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:37:45] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 21 11:37:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:37:45] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 21 11:37:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:45.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:37:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:46.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 12 KiB/s wr, 58 op/s
Jan 21 11:37:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:37:47.179Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:37:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:37:47.179Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:37:47 np0005590810 nova_compute[251104]: 2026-01-21 16:37:47.405 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:47.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:48.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 21 11:37:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:49.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:50 np0005590810 nova_compute[251104]: 2026-01-21 16:37:50.262 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:50.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 21 11:37:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:37:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:51.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:52 np0005590810 nova_compute[251104]: 2026-01-21 16:37:52.409 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:37:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:52.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:37:52 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 21 11:37:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:53.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:37:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:37:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:54.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:54 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 597 B/s wr, 13 op/s
Jan 21 11:37:55 np0005590810 nova_compute[251104]: 2026-01-21 16:37:55.265 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:37:55] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Jan 21 11:37:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:37:55] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Jan 21 11:37:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:55.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:37:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:56.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:56 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 21 11:37:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:37:57.180Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:37:57 np0005590810 nova_compute[251104]: 2026-01-21 16:37:57.413 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:37:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:57.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:37:58.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:37:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:37:58 np0005590810 podman[266986]: 2026-01-21 16:37:58.696552882 +0000 UTC m=+0.069015057 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:37:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:37:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:37:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:37:59.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:00 np0005590810 nova_compute[251104]: 2026-01-21 16:38:00.267 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:00.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:38:00 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:38:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:01.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:02 np0005590810 nova_compute[251104]: 2026-01-21 16:38:02.418 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:02.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:38:02 np0005590810 podman[267010]: 2026-01-21 16:38:02.716334388 +0000 UTC m=+0.094206509 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 21 11:38:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:03.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:04.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:38:05 np0005590810 nova_compute[251104]: 2026-01-21 16:38:05.271 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:38:05] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Jan 21 11:38:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:38:05] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Jan 21 11:38:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:05.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:38:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:38:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:38:07.181Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:38:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:38:07.181Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:38:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:38:07.181Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:38:07 np0005590810 nova_compute[251104]: 2026-01-21 16:38:07.421 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:07.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:38:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:38:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:38:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:38:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:38:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:38:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:38:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:38:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:38:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:09.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:10 np0005590810 nova_compute[251104]: 2026-01-21 16:38:10.273 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:10.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:38:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:38:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:11.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:12 np0005590810 nova_compute[251104]: 2026-01-21 16:38:12.425 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:12.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:38:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:13.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:14.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:38:15 np0005590810 nova_compute[251104]: 2026-01-21 16:38:15.276 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:38:15] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Jan 21 11:38:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:38:15] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Jan 21 11:38:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:15.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:38:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:16.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:38:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:38:17.182Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:38:17 np0005590810 nova_compute[251104]: 2026-01-21 16:38:17.428 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:17.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:18.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:38:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.002000064s ======
Jan 21 11:38:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:19.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Jan 21 11:38:20 np0005590810 nova_compute[251104]: 2026-01-21 16:38:20.278 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:20.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:38:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:38:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:21.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:22.028 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:38:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:22.028 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:38:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:22.028 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:38:22 np0005590810 nova_compute[251104]: 2026-01-21 16:38:22.432 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:22.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Jan 21 11:38:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:23.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:38:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:38:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:24.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 21 11:38:25 np0005590810 nova_compute[251104]: 2026-01-21 16:38:25.280 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:38:25] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 21 11:38:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:38:25] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 21 11:38:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:25.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:25 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:38:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:26.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 21 11:38:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:38:27.182Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:38:27 np0005590810 nova_compute[251104]: 2026-01-21 16:38:27.435 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:27.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:38:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:28.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:38:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 21 11:38:29 np0005590810 podman[267114]: 2026-01-21 16:38:29.674219803 +0000 UTC m=+0.054686765 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 11:38:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:29.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:30 np0005590810 nova_compute[251104]: 2026-01-21 16:38:30.282 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:30.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 21 11:38:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:38:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:31.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:32 np0005590810 nova_compute[251104]: 2026-01-21 16:38:32.439 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:32.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 21 11:38:33 np0005590810 podman[267138]: 2026-01-21 16:38:33.718314574 +0000 UTC m=+0.086769165 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 21 11:38:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:33.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:34.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 91 op/s
Jan 21 11:38:35 np0005590810 nova_compute[251104]: 2026-01-21 16:38:35.284 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:35 np0005590810 nova_compute[251104]: 2026-01-21 16:38:35.542 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:38:35 np0005590810 nova_compute[251104]: 2026-01-21 16:38:35.542 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:38:35 np0005590810 nova_compute[251104]: 2026-01-21 16:38:35.560 251108 DEBUG nova.compute.manager [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 21 11:38:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:38:35] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 21 11:38:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:38:35] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 21 11:38:35 np0005590810 nova_compute[251104]: 2026-01-21 16:38:35.638 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:38:35 np0005590810 nova_compute[251104]: 2026-01-21 16:38:35.638 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:38:35 np0005590810 nova_compute[251104]: 2026-01-21 16:38:35.645 251108 DEBUG nova.virt.hardware [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 21 11:38:35 np0005590810 nova_compute[251104]: 2026-01-21 16:38:35.646 251108 INFO nova.compute.claims [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 21 11:38:35 np0005590810 nova_compute[251104]: 2026-01-21 16:38:35.755 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:38:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:35.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:35 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:38:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:38:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1440597706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.222 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.230 251108 DEBUG nova.compute.provider_tree [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.248 251108 DEBUG nova.scheduler.client.report [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.273 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.274 251108 DEBUG nova.compute.manager [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.322 251108 DEBUG nova.compute.manager [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.323 251108 DEBUG nova.network.neutron [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.348 251108 INFO nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.368 251108 DEBUG nova.compute.manager [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.379 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:38:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.459 251108 DEBUG nova.compute.manager [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 21 11:38:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:36.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.460 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.461 251108 INFO nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Creating image(s)#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.492 251108 DEBUG nova.storage.rbd_utils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.527 251108 DEBUG nova.storage.rbd_utils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:38:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 91 op/s
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.560 251108 DEBUG nova.storage.rbd_utils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.564 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2feac22a67fc835e7393e231263ebe1fb23c2b92 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.623 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2feac22a67fc835e7393e231263ebe1fb23c2b92 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.625 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "2feac22a67fc835e7393e231263ebe1fb23c2b92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.626 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "2feac22a67fc835e7393e231263ebe1fb23c2b92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.627 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "2feac22a67fc835e7393e231263ebe1fb23c2b92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.658 251108 DEBUG nova.storage.rbd_utils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.663 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2feac22a67fc835e7393e231263ebe1fb23c2b92 b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:38:36 np0005590810 nova_compute[251104]: 2026-01-21 16:38:36.834 251108 DEBUG nova.policy [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '918cf3fb78394ce8b3ade91a1ad699fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d6214185b004f9c9798abfc29d1ae14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 21 11:38:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:38:37.183Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:38:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:38:37.183Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:38:37 np0005590810 nova_compute[251104]: 2026-01-21 16:38:37.352 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2feac22a67fc835e7393e231263ebe1fb23c2b92 b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.689s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:38:37 np0005590810 nova_compute[251104]: 2026-01-21 16:38:37.438 251108 DEBUG nova.storage.rbd_utils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] resizing rbd image b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 21 11:38:37 np0005590810 nova_compute[251104]: 2026-01-21 16:38:37.485 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:37 np0005590810 nova_compute[251104]: 2026-01-21 16:38:37.609 251108 DEBUG nova.objects.instance [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lazy-loading 'migration_context' on Instance uuid b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 21 11:38:37 np0005590810 nova_compute[251104]: 2026-01-21 16:38:37.631 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 21 11:38:37 np0005590810 nova_compute[251104]: 2026-01-21 16:38:37.631 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Ensure instance console log exists: /var/lib/nova/instances/b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 21 11:38:37 np0005590810 nova_compute[251104]: 2026-01-21 16:38:37.632 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:38:37 np0005590810 nova_compute[251104]: 2026-01-21 16:38:37.632 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:38:37 np0005590810 nova_compute[251104]: 2026-01-21 16:38:37.632 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:38:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:37.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:38 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:38.088 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:19:7b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:3b:98:31:96:2a'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:38:38 np0005590810 nova_compute[251104]: 2026-01-21 16:38:38.088 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:38 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:38.089 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 11:38:38 np0005590810 nova_compute[251104]: 2026-01-21 16:38:38.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:38:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:38.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:38:38 np0005590810 nova_compute[251104]: 2026-01-21 16:38:38.655 251108 DEBUG nova.network.neutron [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Successfully updated port: 7f780d95-7b41-45c3-ab41-4c82414a5aab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 21 11:38:38 np0005590810 nova_compute[251104]: 2026-01-21 16:38:38.674 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "refresh_cache-b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:38:38 np0005590810 nova_compute[251104]: 2026-01-21 16:38:38.674 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquired lock "refresh_cache-b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:38:38 np0005590810 nova_compute[251104]: 2026-01-21 16:38:38.674 251108 DEBUG nova.network.neutron [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 21 11:38:38 np0005590810 nova_compute[251104]: 2026-01-21 16:38:38.737 251108 DEBUG nova.compute.manager [req-9b6db499-1427-4772-a481-9265e39df5ba req-8d7dcf61-21e1-4ac1-b459-172260314104 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received event network-changed-7f780d95-7b41-45c3-ab41-4c82414a5aab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:38:38 np0005590810 nova_compute[251104]: 2026-01-21 16:38:38.737 251108 DEBUG nova.compute.manager [req-9b6db499-1427-4772-a481-9265e39df5ba req-8d7dcf61-21e1-4ac1-b459-172260314104 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Refreshing instance network info cache due to event network-changed-7f780d95-7b41-45c3-ab41-4c82414a5aab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 21 11:38:38 np0005590810 nova_compute[251104]: 2026-01-21 16:38:38.738 251108 DEBUG oslo_concurrency.lockutils [req-9b6db499-1427-4772-a481-9265e39df5ba req-8d7dcf61-21e1-4ac1-b459-172260314104 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "refresh_cache-b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:38:38 np0005590810 nova_compute[251104]: 2026-01-21 16:38:38.796 251108 DEBUG nova.network.neutron [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 21 11:38:39 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:39.092 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f6e8413f-2ba2-49cb-8bd6-36b8085ce01c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:38:39
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta', '.mgr', 'backups', 'default.rgw.log', '.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control']
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:38:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:38:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:38:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:38:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:39.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.275 251108 DEBUG nova.network.neutron [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Updating instance_info_cache with network_info: [{"id": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "address": "fa:16:3e:0c:05:a3", "network": {"id": "18ec68fc-c1ec-4eaf-93b9-386e7b0477a2", "bridge": "br-int", "label": "tempest-network-smoke--461926091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f780d95-7b", "ovs_interfaceid": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.285 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.296 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Releasing lock "refresh_cache-b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.296 251108 DEBUG nova.compute.manager [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Instance network_info: |[{"id": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "address": "fa:16:3e:0c:05:a3", "network": {"id": "18ec68fc-c1ec-4eaf-93b9-386e7b0477a2", "bridge": "br-int", "label": "tempest-network-smoke--461926091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f780d95-7b", "ovs_interfaceid": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.297 251108 DEBUG oslo_concurrency.lockutils [req-9b6db499-1427-4772-a481-9265e39df5ba req-8d7dcf61-21e1-4ac1-b459-172260314104 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquired lock "refresh_cache-b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.297 251108 DEBUG nova.network.neutron [req-9b6db499-1427-4772-a481-9265e39df5ba req-8d7dcf61-21e1-4ac1-b459-172260314104 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Refreshing network info cache for port 7f780d95-7b41-45c3-ab41-4c82414a5aab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.300 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Start _get_guest_xml network_info=[{"id": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "address": "fa:16:3e:0c:05:a3", "network": {"id": "18ec68fc-c1ec-4eaf-93b9-386e7b0477a2", "bridge": "br-int", "label": "tempest-network-smoke--461926091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f780d95-7b", "ovs_interfaceid": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-21T16:29:46Z,direct_url=<?>,disk_format='qcow2',id=437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ad455439fcc6470fa721af543ff96c56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-21T16:29:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'guest_format': None, 'size': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_format': None, 'image_id': '437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.304 251108 WARNING nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.308 251108 DEBUG nova.virt.libvirt.host [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.309 251108 DEBUG nova.virt.libvirt.host [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.315 251108 DEBUG nova.virt.libvirt.host [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.316 251108 DEBUG nova.virt.libvirt.host [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.316 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.316 251108 DEBUG nova.virt.hardware [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-21T16:29:45Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1e6b96db-db66-4485-bb89-2da0df7b45b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-21T16:29:46Z,direct_url=<?>,disk_format='qcow2',id=437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ad455439fcc6470fa721af543ff96c56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-21T16:29:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.317 251108 DEBUG nova.virt.hardware [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.317 251108 DEBUG nova.virt.hardware [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.317 251108 DEBUG nova.virt.hardware [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.318 251108 DEBUG nova.virt.hardware [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.318 251108 DEBUG nova.virt.hardware [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.318 251108 DEBUG nova.virt.hardware [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.319 251108 DEBUG nova.virt.hardware [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.319 251108 DEBUG nova.virt.hardware [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.319 251108 DEBUG nova.virt.hardware [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.319 251108 DEBUG nova.virt.hardware [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.322 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.370 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.370 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.392 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.392 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.393 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:38:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:40.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:38:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 11:38:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/802521240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.809 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:38:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.838 251108 DEBUG nova.storage.rbd_utils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:38:40 np0005590810 nova_compute[251104]: 2026-01-21 16:38:40.843 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:38:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 11:38:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2261932278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.342 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.344 251108 DEBUG nova.virt.libvirt.vif [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-21T16:38:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-730706180',display_name='tempest-TestNetworkBasicOps-server-730706180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-730706180',id=9,image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOE40H/oSSt2fDlJte3oY71NnnI3Isi4Z6pVSxzkKTWeadt6Haz8+SnEa6J8pk+uOJtpduvGYnZyOBSogC1GZkBlmtI9u6m/g29oFU3yoMuoy7rLLGeIO/9jqqhWXbEovg==',key_name='tempest-TestNetworkBasicOps-1617053112',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d6214185b004f9c9798abfc29d1ae14',ramdisk_id='',reservation_id='r-x92qnynd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1793517209',owner_user_name='tempest-TestNetworkBasicOps-1793517209-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-21T16:38:36Z,user_data=None,user_id='918cf3fb78394ce8b3ade91a1ad699fc',uuid=b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "address": "fa:16:3e:0c:05:a3", "network": {"id": "18ec68fc-c1ec-4eaf-93b9-386e7b0477a2", "bridge": "br-int", "label": "tempest-network-smoke--461926091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f780d95-7b", "ovs_interfaceid": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.344 251108 DEBUG nova.network.os_vif_util [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converting VIF {"id": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "address": "fa:16:3e:0c:05:a3", "network": {"id": "18ec68fc-c1ec-4eaf-93b9-386e7b0477a2", "bridge": "br-int", "label": "tempest-network-smoke--461926091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f780d95-7b", "ovs_interfaceid": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.345 251108 DEBUG nova.network.os_vif_util [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:05:a3,bridge_name='br-int',has_traffic_filtering=True,id=7f780d95-7b41-45c3-ab41-4c82414a5aab,network=Network(18ec68fc-c1ec-4eaf-93b9-386e7b0477a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7f780d95-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.346 251108 DEBUG nova.objects.instance [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lazy-loading 'pci_devices' on Instance uuid b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.362 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] End _get_guest_xml xml=<domain type="kvm">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  <uuid>b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c</uuid>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  <name>instance-00000009</name>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  <memory>131072</memory>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  <vcpu>1</vcpu>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  <metadata>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <nova:name>tempest-TestNetworkBasicOps-server-730706180</nova:name>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <nova:creationTime>2026-01-21 16:38:40</nova:creationTime>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <nova:flavor name="m1.nano">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <nova:memory>128</nova:memory>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <nova:disk>1</nova:disk>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <nova:swap>0</nova:swap>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <nova:ephemeral>0</nova:ephemeral>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <nova:vcpus>1</nova:vcpus>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      </nova:flavor>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <nova:owner>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <nova:user uuid="918cf3fb78394ce8b3ade91a1ad699fc">tempest-TestNetworkBasicOps-1793517209-project-member</nova:user>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <nova:project uuid="3d6214185b004f9c9798abfc29d1ae14">tempest-TestNetworkBasicOps-1793517209</nova:project>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      </nova:owner>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <nova:root type="image" uuid="437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <nova:ports>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <nova:port uuid="7f780d95-7b41-45c3-ab41-4c82414a5aab">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        </nova:port>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      </nova:ports>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    </nova:instance>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  </metadata>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  <sysinfo type="smbios">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <system>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <entry name="manufacturer">RDO</entry>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <entry name="product">OpenStack Compute</entry>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <entry name="serial">b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c</entry>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <entry name="uuid">b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c</entry>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <entry name="family">Virtual Machine</entry>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    </system>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  </sysinfo>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  <os>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <boot dev="hd"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <smbios mode="sysinfo"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  </os>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  <features>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <acpi/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <apic/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <vmcoreinfo/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  </features>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  <clock offset="utc">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <timer name="pit" tickpolicy="delay"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <timer name="hpet" present="no"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  </clock>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  <cpu mode="host-model" match="exact">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <topology sockets="1" cores="1" threads="1"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  </cpu>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  <devices>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <disk type="network" device="disk">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <driver type="raw" cache="none"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <source protocol="rbd" name="vms/b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <host name="192.168.122.100" port="6789"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <host name="192.168.122.102" port="6789"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <host name="192.168.122.101" port="6789"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      </source>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <auth username="openstack">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <secret type="ceph" uuid="d9745984-fea8-5195-8ec5-61f685b5c785"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      </auth>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <target dev="vda" bus="virtio"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    </disk>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <disk type="network" device="cdrom">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <driver type="raw" cache="none"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <source protocol="rbd" name="vms/b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk.config">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <host name="192.168.122.100" port="6789"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <host name="192.168.122.102" port="6789"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <host name="192.168.122.101" port="6789"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      </source>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <auth username="openstack">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:        <secret type="ceph" uuid="d9745984-fea8-5195-8ec5-61f685b5c785"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      </auth>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <target dev="sda" bus="sata"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    </disk>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <interface type="ethernet">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <mac address="fa:16:3e:0c:05:a3"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <model type="virtio"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <driver name="vhost" rx_queue_size="512"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <mtu size="1442"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <target dev="tap7f780d95-7b"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    </interface>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <serial type="pty">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <log file="/var/lib/nova/instances/b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c/console.log" append="off"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    </serial>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <video>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <model type="virtio"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    </video>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <input type="tablet" bus="usb"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <rng model="virtio">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <backend model="random">/dev/urandom</backend>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    </rng>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <controller type="usb" index="0"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    <memballoon model="virtio">
Jan 21 11:38:41 np0005590810 nova_compute[251104]:      <stats period="10"/>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:    </memballoon>
Jan 21 11:38:41 np0005590810 nova_compute[251104]:  </devices>
Jan 21 11:38:41 np0005590810 nova_compute[251104]: </domain>
Jan 21 11:38:41 np0005590810 nova_compute[251104]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.364 251108 DEBUG nova.compute.manager [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Preparing to wait for external event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.364 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.364 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.365 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.365 251108 DEBUG nova.virt.libvirt.vif [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-21T16:38:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-730706180',display_name='tempest-TestNetworkBasicOps-server-730706180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-730706180',id=9,image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOE40H/oSSt2fDlJte3oY71NnnI3Isi4Z6pVSxzkKTWeadt6Haz8+SnEa6J8pk+uOJtpduvGYnZyOBSogC1GZkBlmtI9u6m/g29oFU3yoMuoy7rLLGeIO/9jqqhWXbEovg==',key_name='tempest-TestNetworkBasicOps-1617053112',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d6214185b004f9c9798abfc29d1ae14',ramdisk_id='',reservation_id='r-x92qnynd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1793517209',owner_user_name='tempest-TestNetworkBasicOps-1793517209-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-21T16:38:36Z,user_data=None,user_id='918cf3fb78394ce8b3ade91a1ad699fc',uuid=b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "address": "fa:16:3e:0c:05:a3", "network": {"id": "18ec68fc-c1ec-4eaf-93b9-386e7b0477a2", "bridge": "br-int", "label": "tempest-network-smoke--461926091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f780d95-7b", "ovs_interfaceid": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.366 251108 DEBUG nova.network.os_vif_util [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converting VIF {"id": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "address": "fa:16:3e:0c:05:a3", "network": {"id": "18ec68fc-c1ec-4eaf-93b9-386e7b0477a2", "bridge": "br-int", "label": "tempest-network-smoke--461926091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f780d95-7b", "ovs_interfaceid": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.366 251108 DEBUG nova.network.os_vif_util [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:05:a3,bridge_name='br-int',has_traffic_filtering=True,id=7f780d95-7b41-45c3-ab41-4c82414a5aab,network=Network(18ec68fc-c1ec-4eaf-93b9-386e7b0477a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7f780d95-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.367 251108 DEBUG os_vif [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:05:a3,bridge_name='br-int',has_traffic_filtering=True,id=7f780d95-7b41-45c3-ab41-4c82414a5aab,network=Network(18ec68fc-c1ec-4eaf-93b9-386e7b0477a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7f780d95-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.367 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.368 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.368 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.372 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.372 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f780d95-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.372 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f780d95-7b, col_values=(('external_ids', {'iface-id': '7f780d95-7b41-45c3-ab41-4c82414a5aab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:05:a3', 'vm-uuid': 'b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.374 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:41 np0005590810 NetworkManager[48894]: <info>  [1769013521.3750] manager: (tap7f780d95-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.377 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.383 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.384 251108 INFO os_vif [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:05:a3,bridge_name='br-int',has_traffic_filtering=True,id=7f780d95-7b41-45c3-ab41-4c82414a5aab,network=Network(18ec68fc-c1ec-4eaf-93b9-386e7b0477a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7f780d95-7b')#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.434 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.435 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.435 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] No VIF found with MAC fa:16:3e:0c:05:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.436 251108 INFO nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Using config drive#033[00m
Jan 21 11:38:41 np0005590810 nova_compute[251104]: 2026-01-21 16:38:41.461 251108 DEBUG nova.storage.rbd_utils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:38:41 np0005590810 ceph-mgr[74671]: [devicehealth INFO root] Check health
Jan 21 11:38:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:41.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.233 251108 INFO nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Creating config drive at /var/lib/nova/instances/b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c/disk.config#033[00m
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.239 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmbu58kd3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.369 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmbu58kd3" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.406 251108 DEBUG nova.storage.rbd_utils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.411 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c/disk.config b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.431 251108 DEBUG nova.network.neutron [req-9b6db499-1427-4772-a481-9265e39df5ba req-8d7dcf61-21e1-4ac1-b459-172260314104 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Updated VIF entry in instance network info cache for port 7f780d95-7b41-45c3-ab41-4c82414a5aab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.433 251108 DEBUG nova.network.neutron [req-9b6db499-1427-4772-a481-9265e39df5ba req-8d7dcf61-21e1-4ac1-b459-172260314104 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Updating instance_info_cache with network_info: [{"id": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "address": "fa:16:3e:0c:05:a3", "network": {"id": "18ec68fc-c1ec-4eaf-93b9-386e7b0477a2", "bridge": "br-int", "label": "tempest-network-smoke--461926091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f780d95-7b", "ovs_interfaceid": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:38:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:42.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.467 251108 DEBUG oslo_concurrency.lockutils [req-9b6db499-1427-4772-a481-9265e39df5ba req-8d7dcf61-21e1-4ac1-b459-172260314104 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Releasing lock "refresh_cache-b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:38:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.625 251108 DEBUG oslo_concurrency.processutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c/disk.config b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.625 251108 INFO nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Deleting local config drive /var/lib/nova/instances/b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c/disk.config because it was imported into RBD.#033[00m
Jan 21 11:38:42 np0005590810 systemd[1]: Starting libvirt secret daemon...
Jan 21 11:38:42 np0005590810 systemd[1]: Started libvirt secret daemon.
Jan 21 11:38:42 np0005590810 kernel: tap7f780d95-7b: entered promiscuous mode
Jan 21 11:38:42 np0005590810 NetworkManager[48894]: <info>  [1769013522.7338] manager: (tap7f780d95-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 21 11:38:42 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:42Z|00058|binding|INFO|Claiming lport 7f780d95-7b41-45c3-ab41-4c82414a5aab for this chassis.
Jan 21 11:38:42 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:42Z|00059|binding|INFO|7f780d95-7b41-45c3-ab41-4c82414a5aab: Claiming fa:16:3e:0c:05:a3 10.100.0.13
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.734 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.745 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.747 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:42 np0005590810 NetworkManager[48894]: <info>  [1769013522.7498] manager: (patch-provnet-b53c687f-ce80-4374-bb32-b17e6ca8f621-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Jan 21 11:38:42 np0005590810 NetworkManager[48894]: <info>  [1769013522.7509] manager: (patch-br-int-to-provnet-b53c687f-ce80-4374-bb32-b17e6ca8f621): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.752 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:05:a3 10.100.0.13'], port_security=['fa:16:3e:0c:05:a3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-434942515', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-434942515', 'neutron:project_id': '3d6214185b004f9c9798abfc29d1ae14', 'neutron:revision_number': '7', 'neutron:security_group_ids': '7ab99ce4-4855-4af8-8e67-f44b92f51ea9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=341c28d3-5b2f-4edd-b87c-07ccd7cb06ef, chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], logical_port=7f780d95-7b41-45c3-ab41-4c82414a5aab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.753 163593 INFO neutron.agent.ovn.metadata.agent [-] Port 7f780d95-7b41-45c3-ab41-4c82414a5aab in datapath 18ec68fc-c1ec-4eaf-93b9-386e7b0477a2 bound to our chassis#033[00m
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.754 163593 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18ec68fc-c1ec-4eaf-93b9-386e7b0477a2#033[00m
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.767 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[1a875f35-93c9-486c-8dc2-52e0a359a9fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.768 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap18ec68fc-c1 in ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 21 11:38:42 np0005590810 systemd-udevd[267515]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.770 260432 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap18ec68fc-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.770 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9891af-79cf-4e22-ba1c-8de0047228d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.771 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[354fb280-35b6-425b-b1a9-7f2ef1db78cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:42 np0005590810 systemd-machined[217254]: New machine qemu-3-instance-00000009.
Jan 21 11:38:42 np0005590810 NetworkManager[48894]: <info>  [1769013522.7845] device (tap7f780d95-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 11:38:42 np0005590810 NetworkManager[48894]: <info>  [1769013522.7851] device (tap7f780d95-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.788 163844 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c2326f-7250-402c-a1da-a470393575ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:42 np0005590810 systemd[1]: Started Virtual Machine qemu-3-instance-00000009.
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.816 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[83a378d0-c54b-4f42-a7c6-688b7126d8d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.838 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.844 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.852 260499 DEBUG oslo.privsep.daemon [-] privsep: reply[4975b0d4-786b-4323-acf8-be56c61cd892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:42 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:42Z|00060|binding|INFO|Setting lport 7f780d95-7b41-45c3-ab41-4c82414a5aab ovn-installed in OVS
Jan 21 11:38:42 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:42Z|00061|binding|INFO|Setting lport 7f780d95-7b41-45c3-ab41-4c82414a5aab up in Southbound
Jan 21 11:38:42 np0005590810 NetworkManager[48894]: <info>  [1769013522.8614] manager: (tap18ec68fc-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.860 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[5e34a839-aa9d-4b4a-981f-d5975d461340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:42 np0005590810 nova_compute[251104]: 2026-01-21 16:38:42.862 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.901 260499 DEBUG oslo.privsep.daemon [-] privsep: reply[5988b517-a3dd-4035-bda6-f541129ba32a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.904 260499 DEBUG oslo.privsep.daemon [-] privsep: reply[82d68d6d-9faf-4b8a-b630-a8d3dac1117a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:42 np0005590810 NetworkManager[48894]: <info>  [1769013522.9283] device (tap18ec68fc-c0): carrier: link connected
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.932 260499 DEBUG oslo.privsep.daemon [-] privsep: reply[b1674f4a-7222-4e4c-93ee-836804a0301d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.952 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[2abb58a5-e429-4e08-9cbd-fe2b50b36164]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18ec68fc-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:6c:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473960, 'reachable_time': 30301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267597, 'error': None, 'target': 'ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.965 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[4de284ef-1701-4f6d-a941-bbdacc718228]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:6c02'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473960, 'tstamp': 473960}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267599, 'error': None, 'target': 'ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:42.982 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[4382d33b-0c88-494b-b737-59b3d55046e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18ec68fc-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:6c:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473960, 'reachable_time': 30301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267600, 'error': None, 'target': 'ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:43.017 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[761c6a3c-7bc3-4d2c-96cf-39bcc443c046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:43.080 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[76248a8d-3565-470d-8ffe-70fb9f650401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:43.082 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ec68fc-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:43.082 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:43.083 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18ec68fc-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.085 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:43 np0005590810 NetworkManager[48894]: <info>  [1769013523.0858] manager: (tap18ec68fc-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 21 11:38:43 np0005590810 kernel: tap18ec68fc-c0: entered promiscuous mode
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.087 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:43.093 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18ec68fc-c0, col_values=(('external_ids', {'iface-id': 'b8e55541-dbba-4253-988d-19c4b690c151'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:38:43 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:43Z|00062|binding|INFO|Releasing lport b8e55541-dbba-4253-988d-19c4b690c151 from this chassis (sb_readonly=0)
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.095 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.110 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:43.110 163593 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18ec68fc-c1ec-4eaf-93b9-386e7b0477a2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18ec68fc-c1ec-4eaf-93b9-386e7b0477a2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:43.112 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[91ac4733-1b8d-45ff-834b-1f2c925ea6f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:43.113 163593 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: global
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    log         /dev/log local0 debug
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    log-tag     haproxy-metadata-proxy-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    user        root
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    group       root
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    maxconn     1024
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    pidfile     /var/lib/neutron/external/pids/18ec68fc-c1ec-4eaf-93b9-386e7b0477a2.pid.haproxy
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    daemon
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: defaults
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    log global
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    mode http
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    option httplog
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    option dontlognull
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    option http-server-close
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    option forwardfor
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    retries                 3
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    timeout http-request    30s
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    timeout connect         30s
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    timeout client          32s
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    timeout server          32s
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    timeout http-keep-alive 30s
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: listen listener
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    bind 169.254.169.254:80
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    server metadata /var/lib/neutron/metadata_proxy
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]:    http-request add-header X-OVN-Network-ID 18ec68fc-c1ec-4eaf-93b9-386e7b0477a2
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 21 11:38:43 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:43.114 163593 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2', 'env', 'PROCESS_TAG=haproxy-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/18ec68fc-c1ec-4eaf-93b9-386e7b0477a2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.155 251108 DEBUG nova.compute.manager [req-f305a82b-188b-4dfa-8795-5cadb4f59f9d req-f0656de0-8b7e-4874-bdc5-91bc250da9dc 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.155 251108 DEBUG oslo_concurrency.lockutils [req-f305a82b-188b-4dfa-8795-5cadb4f59f9d req-f0656de0-8b7e-4874-bdc5-91bc250da9dc 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.155 251108 DEBUG oslo_concurrency.lockutils [req-f305a82b-188b-4dfa-8795-5cadb4f59f9d req-f0656de0-8b7e-4874-bdc5-91bc250da9dc 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.156 251108 DEBUG oslo_concurrency.lockutils [req-f305a82b-188b-4dfa-8795-5cadb4f59f9d req-f0656de0-8b7e-4874-bdc5-91bc250da9dc 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.156 251108 DEBUG nova.compute.manager [req-f305a82b-188b-4dfa-8795-5cadb4f59f9d req-f0656de0-8b7e-4874-bdc5-91bc250da9dc 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Processing event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.173 251108 DEBUG nova.virt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Emitting event <LifecycleEvent: 1769013523.1734455, b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.174 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] VM Started (Lifecycle Event)
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.176 251108 DEBUG nova.compute.manager [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.180 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.184 251108 INFO nova.virt.libvirt.driver [-] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Instance spawned successfully.
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.184 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.200 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.207 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.212 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.212 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.213 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.213 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.214 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.214 251108 DEBUG nova.virt.libvirt.driver [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.233 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.234 251108 DEBUG nova.virt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Emitting event <LifecycleEvent: 1769013523.173645, b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.234 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] VM Paused (Lifecycle Event)
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.256 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.261 251108 DEBUG nova.virt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Emitting event <LifecycleEvent: 1769013523.179764, b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.262 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] VM Resumed (Lifecycle Event)
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.301 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.305 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.313 251108 INFO nova.compute.manager [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Took 6.85 seconds to spawn the instance on the hypervisor.
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.314 251108 DEBUG nova.compute.manager [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.357 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.392 251108 INFO nova.compute.manager [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Took 7.79 seconds to build instance.
Jan 21 11:38:43 np0005590810 nova_compute[251104]: 2026-01-21 16:38:43.417 251108 DEBUG oslo_concurrency.lockutils [None req-222175ba-98b9-4831-af9c-9e4a123083e7 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:38:43 np0005590810 podman[267696]: 2026-01-21 16:38:43.54191755 +0000 UTC m=+0.063159311 container create 2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:38:43 np0005590810 systemd[1]: Started libpod-conmon-2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791.scope.
Jan 21 11:38:43 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:38:43 np0005590810 podman[267696]: 2026-01-21 16:38:43.512213004 +0000 UTC m=+0.033454785 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 11:38:43 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db559ddad619e663c7ef1aec2ea019cdc88081934157a6039068bd08ff0a8a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:43 np0005590810 podman[267696]: 2026-01-21 16:38:43.628637423 +0000 UTC m=+0.149879214 container init 2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 21 11:38:43 np0005590810 podman[267696]: 2026-01-21 16:38:43.634636492 +0000 UTC m=+0.155878253 container start 2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 21 11:38:43 np0005590810 neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2[267718]: [NOTICE]   (267723) : New worker (267725) forked
Jan 21 11:38:43 np0005590810 neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2[267718]: [NOTICE]   (267723) : Loading success.
Jan 21 11:38:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:43.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:38:44 np0005590810 nova_compute[251104]: 2026-01-21 16:38:44.367 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 11:38:44 np0005590810 nova_compute[251104]: 2026-01-21 16:38:44.391 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:38:44 np0005590810 nova_compute[251104]: 2026-01-21 16:38:44.392 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:38:44 np0005590810 nova_compute[251104]: 2026-01-21 16:38:44.392 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:38:44 np0005590810 nova_compute[251104]: 2026-01-21 16:38:44.392 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 11:38:44 np0005590810 nova_compute[251104]: 2026-01-21 16:38:44.392 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 11:38:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:44.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:38:44 np0005590810 podman[267859]: 2026-01-21 16:38:44.607919386 +0000 UTC m=+0.079041782 container create 1c61d0cd7fccfc5d25246b34de4bbf8aa16af3bedcde11e83e781bb24dfa5b58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Jan 21 11:38:44 np0005590810 podman[267859]: 2026-01-21 16:38:44.551174058 +0000 UTC m=+0.022296504 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:38:44 np0005590810 systemd[1]: Started libpod-conmon-1c61d0cd7fccfc5d25246b34de4bbf8aa16af3bedcde11e83e781bb24dfa5b58.scope.
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:38:44 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:38:44 np0005590810 podman[267859]: 2026-01-21 16:38:44.785856174 +0000 UTC m=+0.256978570 container init 1c61d0cd7fccfc5d25246b34de4bbf8aa16af3bedcde11e83e781bb24dfa5b58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mayer, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:38:44 np0005590810 podman[267859]: 2026-01-21 16:38:44.795102105 +0000 UTC m=+0.266224501 container start 1c61d0cd7fccfc5d25246b34de4bbf8aa16af3bedcde11e83e781bb24dfa5b58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mayer, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Jan 21 11:38:44 np0005590810 podman[267859]: 2026-01-21 16:38:44.799216555 +0000 UTC m=+0.270338951 container attach 1c61d0cd7fccfc5d25246b34de4bbf8aa16af3bedcde11e83e781bb24dfa5b58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mayer, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:38:44 np0005590810 exciting_mayer[267884]: 167 167
Jan 21 11:38:44 np0005590810 systemd[1]: libpod-1c61d0cd7fccfc5d25246b34de4bbf8aa16af3bedcde11e83e781bb24dfa5b58.scope: Deactivated successfully.
Jan 21 11:38:44 np0005590810 conmon[267884]: conmon 1c61d0cd7fccfc5d2524 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c61d0cd7fccfc5d25246b34de4bbf8aa16af3bedcde11e83e781bb24dfa5b58.scope/container/memory.events
Jan 21 11:38:44 np0005590810 podman[267859]: 2026-01-21 16:38:44.803277483 +0000 UTC m=+0.274399879 container died 1c61d0cd7fccfc5d25246b34de4bbf8aa16af3bedcde11e83e781bb24dfa5b58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:38:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502405408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:38:44 np0005590810 nova_compute[251104]: 2026-01-21 16:38:44.882 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 11:38:44 np0005590810 nova_compute[251104]: 2026-01-21 16:38:44.961 251108 DEBUG nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 21 11:38:44 np0005590810 nova_compute[251104]: 2026-01-21 16:38:44.961 251108 DEBUG nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 21 11:38:44 np0005590810 systemd[1]: var-lib-containers-storage-overlay-b7d44463656ebd53396d2d5a7bf64a48ab0f9f5eb57432bb7a57c18944098e8c-merged.mount: Deactivated successfully.
Jan 21 11:38:45 np0005590810 podman[267859]: 2026-01-21 16:38:44.999898219 +0000 UTC m=+0.471020615 container remove 1c61d0cd7fccfc5d25246b34de4bbf8aa16af3bedcde11e83e781bb24dfa5b58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:38:45 np0005590810 systemd[1]: libpod-conmon-1c61d0cd7fccfc5d25246b34de4bbf8aa16af3bedcde11e83e781bb24dfa5b58.scope: Deactivated successfully.
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.184 251108 WARNING nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.185 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4394MB free_disk=59.967525482177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.185 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.185 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.241 251108 DEBUG nova.compute.manager [req-7ccee9b1-b4f6-4663-9632-15ae0118a5d0 req-89415ba2-e574-4b3d-a38c-c0a38299268a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.241 251108 DEBUG oslo_concurrency.lockutils [req-7ccee9b1-b4f6-4663-9632-15ae0118a5d0 req-89415ba2-e574-4b3d-a38c-c0a38299268a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.242 251108 DEBUG oslo_concurrency.lockutils [req-7ccee9b1-b4f6-4663-9632-15ae0118a5d0 req-89415ba2-e574-4b3d-a38c-c0a38299268a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.242 251108 DEBUG oslo_concurrency.lockutils [req-7ccee9b1-b4f6-4663-9632-15ae0118a5d0 req-89415ba2-e574-4b3d-a38c-c0a38299268a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.242 251108 DEBUG nova.compute.manager [req-7ccee9b1-b4f6-4663-9632-15ae0118a5d0 req-89415ba2-e574-4b3d-a38c-c0a38299268a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] No waiting events found dispatching network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.242 251108 WARNING nova.compute.manager [req-7ccee9b1-b4f6-4663-9632-15ae0118a5d0 req-89415ba2-e574-4b3d-a38c-c0a38299268a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received unexpected event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab for instance with vm_state active and task_state None.
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.270 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Instance b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.271 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.271 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 11:38:45 np0005590810 podman[267913]: 2026-01-21 16:38:45.192788898 +0000 UTC m=+0.028189599 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.287 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.290 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Refreshing inventories for resource provider 2519faba-4002-49a2-b483-5098e748d2b5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.310 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Updating ProviderTree inventory for provider 2519faba-4002-49a2-b483-5098e748d2b5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.311 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Updating inventory in ProviderTree for provider 2519faba-4002-49a2-b483-5098e748d2b5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.325 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Refreshing aggregate associations for resource provider 2519faba-4002-49a2-b483-5098e748d2b5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.346 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Refreshing trait associations for resource provider 2519faba-4002-49a2-b483-5098e748d2b5, traits: COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.379 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 11:38:45 np0005590810 podman[267913]: 2026-01-21 16:38:45.389461596 +0000 UTC m=+0.224862277 container create 6a9d24ba26ea0d197f7cec173f24930c36733f72903f5694ca7a71137576e73c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galileo, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:38:45 np0005590810 systemd[1]: Started libpod-conmon-6a9d24ba26ea0d197f7cec173f24930c36733f72903f5694ca7a71137576e73c.scope.
Jan 21 11:38:45 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:38:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/489d6b7e0b7d65480ef4e1c8c487082fb3f1bf76bfbf17fa8a608a4e09f98480/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/489d6b7e0b7d65480ef4e1c8c487082fb3f1bf76bfbf17fa8a608a4e09f98480/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/489d6b7e0b7d65480ef4e1c8c487082fb3f1bf76bfbf17fa8a608a4e09f98480/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/489d6b7e0b7d65480ef4e1c8c487082fb3f1bf76bfbf17fa8a608a4e09f98480/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/489d6b7e0b7d65480ef4e1c8c487082fb3f1bf76bfbf17fa8a608a4e09f98480/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:45 np0005590810 podman[267913]: 2026-01-21 16:38:45.50884543 +0000 UTC m=+0.344246131 container init 6a9d24ba26ea0d197f7cec173f24930c36733f72903f5694ca7a71137576e73c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:38:45 np0005590810 podman[267913]: 2026-01-21 16:38:45.516622664 +0000 UTC m=+0.352023345 container start 6a9d24ba26ea0d197f7cec173f24930c36733f72903f5694ca7a71137576e73c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galileo, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:38:45 np0005590810 podman[267913]: 2026-01-21 16:38:45.52281241 +0000 UTC m=+0.358213091 container attach 6a9d24ba26ea0d197f7cec173f24930c36733f72903f5694ca7a71137576e73c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galileo, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:38:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:38:45] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Jan 21 11:38:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:38:45] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.699 251108 DEBUG oslo_concurrency.lockutils [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.699 251108 DEBUG oslo_concurrency.lockutils [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.700 251108 DEBUG oslo_concurrency.lockutils [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.700 251108 DEBUG oslo_concurrency.lockutils [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.700 251108 DEBUG oslo_concurrency.lockutils [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.701 251108 INFO nova.compute.manager [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Terminating instance
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.702 251108 DEBUG nova.compute.manager [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 21 11:38:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:45.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:45 np0005590810 cranky_galileo[267931]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:38:45 np0005590810 cranky_galileo[267931]: --> All data devices are unavailable
Jan 21 11:38:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:38:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:38:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2945997894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:38:45 np0005590810 systemd[1]: libpod-6a9d24ba26ea0d197f7cec173f24930c36733f72903f5694ca7a71137576e73c.scope: Deactivated successfully.
Jan 21 11:38:45 np0005590810 podman[267913]: 2026-01-21 16:38:45.900397959 +0000 UTC m=+0.735798640 container died 6a9d24ba26ea0d197f7cec173f24930c36733f72903f5694ca7a71137576e73c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.903 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.911 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.926 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.946 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.947 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:38:45 np0005590810 kernel: tap7f780d95-7b (unregistering): left promiscuous mode
Jan 21 11:38:45 np0005590810 NetworkManager[48894]: <info>  [1769013525.9550] device (tap7f780d95-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 21 11:38:45 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:45Z|00063|binding|INFO|Releasing lport 7f780d95-7b41-45c3-ab41-4c82414a5aab from this chassis (sb_readonly=0)
Jan 21 11:38:45 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:45Z|00064|binding|INFO|Setting lport 7f780d95-7b41-45c3-ab41-4c82414a5aab down in Southbound
Jan 21 11:38:45 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:45Z|00065|binding|INFO|Removing iface tap7f780d95-7b ovn-installed in OVS
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.966 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:38:45 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:45.975 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:05:a3 10.100.0.13'], port_security=['fa:16:3e:0c:05:a3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-434942515', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-434942515', 'neutron:project_id': '3d6214185b004f9c9798abfc29d1ae14', 'neutron:revision_number': '9', 'neutron:security_group_ids': '7ab99ce4-4855-4af8-8e67-f44b92f51ea9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=341c28d3-5b2f-4edd-b87c-07ccd7cb06ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], logical_port=7f780d95-7b41-45c3-ab41-4c82414a5aab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 11:38:45 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:45.977 163593 INFO neutron.agent.ovn.metadata.agent [-] Port 7f780d95-7b41-45c3-ab41-4c82414a5aab in datapath 18ec68fc-c1ec-4eaf-93b9-386e7b0477a2 unbound from our chassis
Jan 21 11:38:45 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:45.978 163593 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18ec68fc-c1ec-4eaf-93b9-386e7b0477a2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 21 11:38:45 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:45.979 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb3840c-ddee-482c-aa72-84c8016a8d4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 21 11:38:45 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:45.981 163593 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2 namespace which is not needed anymore
Jan 21 11:38:45 np0005590810 nova_compute[251104]: 2026-01-21 16:38:45.984 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:38:46 np0005590810 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 21 11:38:46 np0005590810 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000009.scope: Consumed 2.905s CPU time.
Jan 21 11:38:46 np0005590810 systemd[1]: var-lib-containers-storage-overlay-489d6b7e0b7d65480ef4e1c8c487082fb3f1bf76bfbf17fa8a608a4e09f98480-merged.mount: Deactivated successfully.
Jan 21 11:38:46 np0005590810 systemd-machined[217254]: Machine qemu-3-instance-00000009 terminated.
Jan 21 11:38:46 np0005590810 podman[267913]: 2026-01-21 16:38:46.057680526 +0000 UTC m=+0.893081207 container remove 6a9d24ba26ea0d197f7cec173f24930c36733f72903f5694ca7a71137576e73c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_galileo, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:38:46 np0005590810 systemd[1]: libpod-conmon-6a9d24ba26ea0d197f7cec173f24930c36733f72903f5694ca7a71137576e73c.scope: Deactivated successfully.
Jan 21 11:38:46 np0005590810 kernel: tap7f780d95-7b: entered promiscuous mode
Jan 21 11:38:46 np0005590810 NetworkManager[48894]: <info>  [1769013526.1230] manager: (tap7f780d95-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Jan 21 11:38:46 np0005590810 kernel: tap7f780d95-7b (unregistering): left promiscuous mode
Jan 21 11:38:46 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:46Z|00066|binding|INFO|Claiming lport 7f780d95-7b41-45c3-ab41-4c82414a5aab for this chassis.
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.128 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:38:46 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:46Z|00067|binding|INFO|7f780d95-7b41-45c3-ab41-4c82414a5aab: Claiming fa:16:3e:0c:05:a3 10.100.0.13
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.137 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:05:a3 10.100.0.13'], port_security=['fa:16:3e:0c:05:a3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-434942515', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-434942515', 'neutron:project_id': '3d6214185b004f9c9798abfc29d1ae14', 'neutron:revision_number': '9', 'neutron:security_group_ids': '7ab99ce4-4855-4af8-8e67-f44b92f51ea9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=341c28d3-5b2f-4edd-b87c-07ccd7cb06ef, chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], logical_port=7f780d95-7b41-45c3-ab41-4c82414a5aab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 11:38:46 np0005590810 neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2[267718]: [NOTICE]   (267723) : haproxy version is 2.8.14-c23fe91
Jan 21 11:38:46 np0005590810 neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2[267718]: [NOTICE]   (267723) : path to executable is /usr/sbin/haproxy
Jan 21 11:38:46 np0005590810 neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2[267718]: [WARNING]  (267723) : Exiting Master process...
Jan 21 11:38:46 np0005590810 neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2[267718]: [WARNING]  (267723) : Exiting Master process...
Jan 21 11:38:46 np0005590810 neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2[267718]: [ALERT]    (267723) : Current worker (267725) exited with code 143 (Terminated)
Jan 21 11:38:46 np0005590810 neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2[267718]: [WARNING]  (267723) : All workers exited. Exiting... (0)
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.151 251108 INFO nova.virt.libvirt.driver [-] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Instance destroyed successfully.
Jan 21 11:38:46 np0005590810 systemd[1]: libpod-2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791.scope: Deactivated successfully.
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.153 251108 DEBUG nova.objects.instance [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lazy-loading 'resources' on Instance uuid b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 21 11:38:46 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:46Z|00068|binding|INFO|Setting lport 7f780d95-7b41-45c3-ab41-4c82414a5aab ovn-installed in OVS
Jan 21 11:38:46 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:46Z|00069|binding|INFO|Setting lport 7f780d95-7b41-45c3-ab41-4c82414a5aab up in Southbound
Jan 21 11:38:46 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:46Z|00070|binding|INFO|Releasing lport 7f780d95-7b41-45c3-ab41-4c82414a5aab from this chassis (sb_readonly=1)
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.155 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:38:46 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:46Z|00071|if_status|INFO|Not setting lport 7f780d95-7b41-45c3-ab41-4c82414a5aab down as sb is readonly
Jan 21 11:38:46 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:46Z|00072|binding|INFO|Removing iface tap7f780d95-7b ovn-installed in OVS
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.157 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:38:46 np0005590810 podman[268003]: 2026-01-21 16:38:46.158200924 +0000 UTC m=+0.056400078 container died 2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 21 11:38:46 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:46Z|00073|binding|INFO|Releasing lport 7f780d95-7b41-45c3-ab41-4c82414a5aab from this chassis (sb_readonly=0)
Jan 21 11:38:46 np0005590810 ovn_controller[152632]: 2026-01-21T16:38:46Z|00074|binding|INFO|Setting lport 7f780d95-7b41-45c3-ab41-4c82414a5aab down in Southbound
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.170 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.187 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:05:a3 10.100.0.13'], port_security=['fa:16:3e:0c:05:a3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-434942515', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-434942515', 'neutron:project_id': '3d6214185b004f9c9798abfc29d1ae14', 'neutron:revision_number': '9', 'neutron:security_group_ids': '7ab99ce4-4855-4af8-8e67-f44b92f51ea9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=341c28d3-5b2f-4edd-b87c-07ccd7cb06ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], logical_port=7f780d95-7b41-45c3-ab41-4c82414a5aab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.194 251108 DEBUG nova.virt.libvirt.vif [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-21T16:38:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-730706180',display_name='tempest-TestNetworkBasicOps-server-730706180',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-730706180',id=9,image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOE40H/oSSt2fDlJte3oY71NnnI3Isi4Z6pVSxzkKTWeadt6Haz8+SnEa6J8pk+uOJtpduvGYnZyOBSogC1GZkBlmtI9u6m/g29oFU3yoMuoy7rLLGeIO/9jqqhWXbEovg==',key_name='tempest-TestNetworkBasicOps-1617053112',keypairs=<?>,launch_index=0,launched_at=2026-01-21T16:38:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d6214185b004f9c9798abfc29d1ae14',ramdisk_id='',reservation_id='r-x92qnynd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1793517209',owner_user_name='tempest-TestNetworkBasicOps-1793517209-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-21T16:38:43Z,user_data=None,user_id='918cf3fb78394ce8b3ade91a1ad699fc',uuid=b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "address": "fa:16:3e:0c:05:a3", "network": {"id": "18ec68fc-c1ec-4eaf-93b9-386e7b0477a2", "bridge": "br-int", "label": "tempest-network-smoke--461926091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f780d95-7b", "ovs_interfaceid": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.194 251108 DEBUG nova.network.os_vif_util [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converting VIF {"id": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "address": "fa:16:3e:0c:05:a3", "network": {"id": "18ec68fc-c1ec-4eaf-93b9-386e7b0477a2", "bridge": "br-int", "label": "tempest-network-smoke--461926091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f780d95-7b", "ovs_interfaceid": "7f780d95-7b41-45c3-ab41-4c82414a5aab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.195 251108 DEBUG nova.network.os_vif_util [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:05:a3,bridge_name='br-int',has_traffic_filtering=True,id=7f780d95-7b41-45c3-ab41-4c82414a5aab,network=Network(18ec68fc-c1ec-4eaf-93b9-386e7b0477a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7f780d95-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.196 251108 DEBUG os_vif [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:05:a3,bridge_name='br-int',has_traffic_filtering=True,id=7f780d95-7b41-45c3-ab41-4c82414a5aab,network=Network(18ec68fc-c1ec-4eaf-93b9-386e7b0477a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7f780d95-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.197 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.198 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f780d95-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.199 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.200 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.204 251108 INFO os_vif [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:05:a3,bridge_name='br-int',has_traffic_filtering=True,id=7f780d95-7b41-45c3-ab41-4c82414a5aab,network=Network(18ec68fc-c1ec-4eaf-93b9-386e7b0477a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7f780d95-7b')#033[00m
Jan 21 11:38:46 np0005590810 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791-userdata-shm.mount: Deactivated successfully.
Jan 21 11:38:46 np0005590810 systemd[1]: var-lib-containers-storage-overlay-7db559ddad619e663c7ef1aec2ea019cdc88081934157a6039068bd08ff0a8a2-merged.mount: Deactivated successfully.
Jan 21 11:38:46 np0005590810 podman[268003]: 2026-01-21 16:38:46.394881623 +0000 UTC m=+0.293080757 container cleanup 2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 11:38:46 np0005590810 systemd[1]: libpod-conmon-2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791.scope: Deactivated successfully.
Jan 21 11:38:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:46 np0005590810 podman[268103]: 2026-01-21 16:38:46.469394941 +0000 UTC m=+0.046383252 container remove 2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 21 11:38:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:46.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.476 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[0bdd0460-aeeb-4e73-8db3-972bf5d91c16]: (4, ('Wed Jan 21 04:38:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2 (2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791)\n2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791\nWed Jan 21 04:38:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2 (2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791)\n2a00512d255098b5ffb4c48faeae518580b04d20051eaedc24b404baf1571791\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.479 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[6348cfc2-b5db-4e3e-bd91-0e832b33f6cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.480 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ec68fc-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.481 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:46 np0005590810 kernel: tap18ec68fc-c0: left promiscuous mode
Jan 21 11:38:46 np0005590810 nova_compute[251104]: 2026-01-21 16:38:46.496 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.499 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[aee6e7cd-1f1e-45cf-af75-4eb7e58d096c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.513 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[5902a23c-5ef6-467d-9488-f1e55529b0a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.515 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[d498884a-b515-4907-bf91-ea4927a30fd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.532 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[994c15db-899c-410f-8a9d-13e19fdf3593]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473952, 'reachable_time': 16356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268140, 'error': None, 'target': 'ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.534 163844 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-18ec68fc-c1ec-4eaf-93b9-386e7b0477a2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.535 163844 DEBUG oslo.privsep.daemon [-] privsep: reply[0e0a1801-2012-4676-bc29-697ced793a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.535 163593 INFO neutron.agent.ovn.metadata.agent [-] Port 7f780d95-7b41-45c3-ab41-4c82414a5aab in datapath 18ec68fc-c1ec-4eaf-93b9-386e7b0477a2 unbound from our chassis#033[00m
Jan 21 11:38:46 np0005590810 systemd[1]: run-netns-ovnmeta\x2d18ec68fc\x2dc1ec\x2d4eaf\x2d93b9\x2d386e7b0477a2.mount: Deactivated successfully.
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.536 163593 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18ec68fc-c1ec-4eaf-93b9-386e7b0477a2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.537 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[36077f32-ec0d-4b18-a932-e0217d00bbd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.538 163593 INFO neutron.agent.ovn.metadata.agent [-] Port 7f780d95-7b41-45c3-ab41-4c82414a5aab in datapath 18ec68fc-c1ec-4eaf-93b9-386e7b0477a2 unbound from our chassis#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.539 163593 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18ec68fc-c1ec-4eaf-93b9-386e7b0477a2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 21 11:38:46 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:38:46.539 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[76eb8609-4d8b-4a68-9ec7-cb76badd8ffc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:38:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 21 11:38:46 np0005590810 podman[268158]: 2026-01-21 16:38:46.705039788 +0000 UTC m=+0.083760421 container create 60c7ca82464e7fd2830cd84aa0a19a9c3a5003b77648cd8490eec8f5cf0e0ae0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_elbakyan, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 21 11:38:46 np0005590810 podman[268158]: 2026-01-21 16:38:46.644076357 +0000 UTC m=+0.022797020 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:38:46 np0005590810 systemd[1]: Started libpod-conmon-60c7ca82464e7fd2830cd84aa0a19a9c3a5003b77648cd8490eec8f5cf0e0ae0.scope.
Jan 21 11:38:46 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:38:46 np0005590810 podman[268158]: 2026-01-21 16:38:46.971199236 +0000 UTC m=+0.349919899 container init 60c7ca82464e7fd2830cd84aa0a19a9c3a5003b77648cd8490eec8f5cf0e0ae0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_elbakyan, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:38:46 np0005590810 podman[268158]: 2026-01-21 16:38:46.982885454 +0000 UTC m=+0.361606087 container start 60c7ca82464e7fd2830cd84aa0a19a9c3a5003b77648cd8490eec8f5cf0e0ae0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:38:46 np0005590810 podman[268158]: 2026-01-21 16:38:46.987086237 +0000 UTC m=+0.365806890 container attach 60c7ca82464e7fd2830cd84aa0a19a9c3a5003b77648cd8490eec8f5cf0e0ae0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_elbakyan, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:38:46 np0005590810 wonderful_elbakyan[268174]: 167 167
Jan 21 11:38:46 np0005590810 systemd[1]: libpod-60c7ca82464e7fd2830cd84aa0a19a9c3a5003b77648cd8490eec8f5cf0e0ae0.scope: Deactivated successfully.
Jan 21 11:38:46 np0005590810 podman[268158]: 2026-01-21 16:38:46.991398133 +0000 UTC m=+0.370118776 container died 60c7ca82464e7fd2830cd84aa0a19a9c3a5003b77648cd8490eec8f5cf0e0ae0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:38:47 np0005590810 systemd[1]: var-lib-containers-storage-overlay-672cddbd81e2985cc15d0f9f66a1815329e9eea7838a72dc0e0e7cebad6ad6cd-merged.mount: Deactivated successfully.
Jan 21 11:38:47 np0005590810 podman[268158]: 2026-01-21 16:38:47.038015652 +0000 UTC m=+0.416736285 container remove 60c7ca82464e7fd2830cd84aa0a19a9c3a5003b77648cd8490eec8f5cf0e0ae0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:38:47 np0005590810 systemd[1]: libpod-conmon-60c7ca82464e7fd2830cd84aa0a19a9c3a5003b77648cd8490eec8f5cf0e0ae0.scope: Deactivated successfully.
Jan 21 11:38:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:38:47.184Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:38:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:38:47.185Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:38:47 np0005590810 podman[268200]: 2026-01-21 16:38:47.262856208 +0000 UTC m=+0.100064294 container create 109fc209cc48cc80ee0c71981d24c9da789bf40a96cc89ba2ab22dc80befa4c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sanderson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:38:47 np0005590810 podman[268200]: 2026-01-21 16:38:47.186476 +0000 UTC m=+0.023684096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:38:47 np0005590810 systemd[1]: Started libpod-conmon-109fc209cc48cc80ee0c71981d24c9da789bf40a96cc89ba2ab22dc80befa4c3.scope.
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.335 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received event network-vif-unplugged-7f780d95-7b41-45c3-ab41-4c82414a5aab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.336 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.336 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.336 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.336 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] No waiting events found dispatching network-vif-unplugged-7f780d95-7b41-45c3-ab41-4c82414a5aab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.337 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received event network-vif-unplugged-7f780d95-7b41-45c3-ab41-4c82414a5aab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.337 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.337 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.337 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.337 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.338 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] No waiting events found dispatching network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.338 251108 WARNING nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received unexpected event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab for instance with vm_state active and task_state deleting.
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.338 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.338 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.339 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.339 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.339 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] No waiting events found dispatching network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.339 251108 WARNING nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received unexpected event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab for instance with vm_state active and task_state deleting.
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.340 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.340 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.340 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.340 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.341 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] No waiting events found dispatching network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.341 251108 WARNING nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received unexpected event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab for instance with vm_state active and task_state deleting.
Jan 21 11:38:47 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.341 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received event network-vif-unplugged-7f780d95-7b41-45c3-ab41-4c82414a5aab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.342 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.342 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.342 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.342 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] No waiting events found dispatching network-vif-unplugged-7f780d95-7b41-45c3-ab41-4c82414a5aab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.342 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received event network-vif-unplugged-7f780d95-7b41-45c3-ab41-4c82414a5aab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.343 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.343 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.343 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.343 251108 DEBUG oslo_concurrency.lockutils [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.343 251108 DEBUG nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] No waiting events found dispatching network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.344 251108 WARNING nova.compute.manager [req-5572f0d6-a8f3-4f3c-8e18-921654e406b4 req-9f4e933c-c608-42dd-a52a-13fb8c54766b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Received unexpected event network-vif-plugged-7f780d95-7b41-45c3-ab41-4c82414a5aab for instance with vm_state active and task_state deleting.
Jan 21 11:38:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87215dae85d4dd69a2936a8bbed158ee8a82e44c89e9dccfe96865b597b8d0b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87215dae85d4dd69a2936a8bbed158ee8a82e44c89e9dccfe96865b597b8d0b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87215dae85d4dd69a2936a8bbed158ee8a82e44c89e9dccfe96865b597b8d0b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:47 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87215dae85d4dd69a2936a8bbed158ee8a82e44c89e9dccfe96865b597b8d0b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:47 np0005590810 podman[268200]: 2026-01-21 16:38:47.365389759 +0000 UTC m=+0.202597935 container init 109fc209cc48cc80ee0c71981d24c9da789bf40a96cc89ba2ab22dc80befa4c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sanderson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:38:47 np0005590810 podman[268200]: 2026-01-21 16:38:47.380525016 +0000 UTC m=+0.217733102 container start 109fc209cc48cc80ee0c71981d24c9da789bf40a96cc89ba2ab22dc80befa4c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sanderson, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 11:38:47 np0005590810 podman[268200]: 2026-01-21 16:38:47.460457405 +0000 UTC m=+0.297665511 container attach 109fc209cc48cc80ee0c71981d24c9da789bf40a96cc89ba2ab22dc80befa4c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]: {
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:    "0": [
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:        {
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            "devices": [
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "/dev/loop3"
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            ],
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            "lv_name": "ceph_lv0",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            "lv_size": "21470642176",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            "name": "ceph_lv0",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            "tags": {
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.cluster_name": "ceph",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.crush_device_class": "",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.encrypted": "0",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.osd_id": "0",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.type": "block",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.vdo": "0",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:                "ceph.with_tpm": "0"
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            },
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            "type": "block",
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:            "vg_name": "ceph_vg0"
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:        }
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]:    ]
Jan 21 11:38:47 np0005590810 optimistic_sanderson[268218]: }
Jan 21 11:38:47 np0005590810 systemd[1]: libpod-109fc209cc48cc80ee0c71981d24c9da789bf40a96cc89ba2ab22dc80befa4c3.scope: Deactivated successfully.
Jan 21 11:38:47 np0005590810 podman[268200]: 2026-01-21 16:38:47.702127682 +0000 UTC m=+0.539335758 container died 109fc209cc48cc80ee0c71981d24c9da789bf40a96cc89ba2ab22dc80befa4c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sanderson, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 11:38:47 np0005590810 systemd[1]: var-lib-containers-storage-overlay-87215dae85d4dd69a2936a8bbed158ee8a82e44c89e9dccfe96865b597b8d0b8-merged.mount: Deactivated successfully.
Jan 21 11:38:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:47.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:47 np0005590810 podman[268200]: 2026-01-21 16:38:47.793594284 +0000 UTC m=+0.630802370 container remove 109fc209cc48cc80ee0c71981d24c9da789bf40a96cc89ba2ab22dc80befa4c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:38:47 np0005590810 systemd[1]: libpod-conmon-109fc209cc48cc80ee0c71981d24c9da789bf40a96cc89ba2ab22dc80befa4c3.scope: Deactivated successfully.
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.948 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.948 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 11:38:47 np0005590810 nova_compute[251104]: 2026-01-21 16:38:47.948 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 11:38:48 np0005590810 nova_compute[251104]: 2026-01-21 16:38:48.257 251108 INFO nova.virt.libvirt.driver [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Deleting instance files /var/lib/nova/instances/b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_del
Jan 21 11:38:48 np0005590810 nova_compute[251104]: 2026-01-21 16:38:48.259 251108 INFO nova.virt.libvirt.driver [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Deletion of /var/lib/nova/instances/b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c_del complete
Jan 21 11:38:48 np0005590810 nova_compute[251104]: 2026-01-21 16:38:48.357 251108 INFO nova.compute.manager [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Took 2.66 seconds to destroy the instance on the hypervisor.
Jan 21 11:38:48 np0005590810 nova_compute[251104]: 2026-01-21 16:38:48.358 251108 DEBUG oslo.service.loopingcall [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 21 11:38:48 np0005590810 nova_compute[251104]: 2026-01-21 16:38:48.358 251108 DEBUG nova.compute.manager [-] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 21 11:38:48 np0005590810 nova_compute[251104]: 2026-01-21 16:38:48.358 251108 DEBUG nova.network.neutron [-] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 21 11:38:48 np0005590810 podman[268331]: 2026-01-21 16:38:48.385605192 +0000 UTC m=+0.041353204 container create 4243eb1170d7fb08844abe3ac1270b92b7714f5338567c447f6911b5bdff43a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 21 11:38:48 np0005590810 systemd[1]: Started libpod-conmon-4243eb1170d7fb08844abe3ac1270b92b7714f5338567c447f6911b5bdff43a5.scope.
Jan 21 11:38:48 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:38:48 np0005590810 podman[268331]: 2026-01-21 16:38:48.368923016 +0000 UTC m=+0.024671048 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:38:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:48.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:48 np0005590810 podman[268331]: 2026-01-21 16:38:48.542092064 +0000 UTC m=+0.197840096 container init 4243eb1170d7fb08844abe3ac1270b92b7714f5338567c447f6911b5bdff43a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhabha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:38:48 np0005590810 podman[268331]: 2026-01-21 16:38:48.550191299 +0000 UTC m=+0.205939311 container start 4243eb1170d7fb08844abe3ac1270b92b7714f5338567c447f6911b5bdff43a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhabha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:38:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 21 11:38:48 np0005590810 dazzling_bhabha[268347]: 167 167
Jan 21 11:38:48 np0005590810 systemd[1]: libpod-4243eb1170d7fb08844abe3ac1270b92b7714f5338567c447f6911b5bdff43a5.scope: Deactivated successfully.
Jan 21 11:38:48 np0005590810 podman[268331]: 2026-01-21 16:38:48.564688936 +0000 UTC m=+0.220436948 container attach 4243eb1170d7fb08844abe3ac1270b92b7714f5338567c447f6911b5bdff43a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:38:48 np0005590810 podman[268331]: 2026-01-21 16:38:48.565553153 +0000 UTC m=+0.221301165 container died 4243eb1170d7fb08844abe3ac1270b92b7714f5338567c447f6911b5bdff43a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:38:48 np0005590810 systemd[1]: var-lib-containers-storage-overlay-6f1fb56c6fb96b48ac7dcd8d407fe08b8f82fc1e3b691f3a6f00f0d984f9b776-merged.mount: Deactivated successfully.
Jan 21 11:38:48 np0005590810 podman[268331]: 2026-01-21 16:38:48.685341938 +0000 UTC m=+0.341089950 container remove 4243eb1170d7fb08844abe3ac1270b92b7714f5338567c447f6911b5bdff43a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 21 11:38:48 np0005590810 systemd[1]: libpod-conmon-4243eb1170d7fb08844abe3ac1270b92b7714f5338567c447f6911b5bdff43a5.scope: Deactivated successfully.
Jan 21 11:38:48 np0005590810 podman[268374]: 2026-01-21 16:38:48.853118896 +0000 UTC m=+0.045898218 container create 2509e9ad4ba5cecae829ee5cd06016049af2ceb1688468e8d77b780b7cc99b3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:38:48 np0005590810 podman[268374]: 2026-01-21 16:38:48.831009799 +0000 UTC m=+0.023789151 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:38:48 np0005590810 systemd[1]: Started libpod-conmon-2509e9ad4ba5cecae829ee5cd06016049af2ceb1688468e8d77b780b7cc99b3a.scope.
Jan 21 11:38:48 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:38:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d392ab6bc8b417134017975ce14a593acce490a29b0ce650c9dbd41fd2e9714/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d392ab6bc8b417134017975ce14a593acce490a29b0ce650c9dbd41fd2e9714/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d392ab6bc8b417134017975ce14a593acce490a29b0ce650c9dbd41fd2e9714/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d392ab6bc8b417134017975ce14a593acce490a29b0ce650c9dbd41fd2e9714/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:38:49 np0005590810 podman[268374]: 2026-01-21 16:38:49.056626819 +0000 UTC m=+0.249406151 container init 2509e9ad4ba5cecae829ee5cd06016049af2ceb1688468e8d77b780b7cc99b3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:38:49 np0005590810 podman[268374]: 2026-01-21 16:38:49.064857488 +0000 UTC m=+0.257636810 container start 2509e9ad4ba5cecae829ee5cd06016049af2ceb1688468e8d77b780b7cc99b3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_bell, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 11:38:49 np0005590810 podman[268374]: 2026-01-21 16:38:49.069894337 +0000 UTC m=+0.262673669 container attach 2509e9ad4ba5cecae829ee5cd06016049af2ceb1688468e8d77b780b7cc99b3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_bell, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:38:49 np0005590810 lvm[268466]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:38:49 np0005590810 lvm[268466]: VG ceph_vg0 finished
Jan 21 11:38:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:49.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:49 np0005590810 charming_bell[268391]: {}
Jan 21 11:38:49 np0005590810 systemd[1]: libpod-2509e9ad4ba5cecae829ee5cd06016049af2ceb1688468e8d77b780b7cc99b3a.scope: Deactivated successfully.
Jan 21 11:38:49 np0005590810 podman[268374]: 2026-01-21 16:38:49.822056292 +0000 UTC m=+1.014835644 container died 2509e9ad4ba5cecae829ee5cd06016049af2ceb1688468e8d77b780b7cc99b3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:38:49 np0005590810 systemd[1]: libpod-2509e9ad4ba5cecae829ee5cd06016049af2ceb1688468e8d77b780b7cc99b3a.scope: Consumed 1.153s CPU time.
Jan 21 11:38:49 np0005590810 systemd[1]: var-lib-containers-storage-overlay-4d392ab6bc8b417134017975ce14a593acce490a29b0ce650c9dbd41fd2e9714-merged.mount: Deactivated successfully.
Jan 21 11:38:50 np0005590810 podman[268374]: 2026-01-21 16:38:50.090688429 +0000 UTC m=+1.283467751 container remove 2509e9ad4ba5cecae829ee5cd06016049af2ceb1688468e8d77b780b7cc99b3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 11:38:50 np0005590810 systemd[1]: libpod-conmon-2509e9ad4ba5cecae829ee5cd06016049af2ceb1688468e8d77b780b7cc99b3a.scope: Deactivated successfully.
Jan 21 11:38:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:38:50 np0005590810 nova_compute[251104]: 2026-01-21 16:38:50.288 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:38:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:50.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 21 11:38:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:38:51 np0005590810 nova_compute[251104]: 2026-01-21 16:38:51.200 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:38:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:51.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:52.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:52 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 21 11:38:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:53.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:38:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:38:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:54.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:54 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 21 11:38:55 np0005590810 nova_compute[251104]: 2026-01-21 16:38:55.290 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:38:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:38:55] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Jan 21 11:38:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:38:55] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Jan 21 11:38:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:38:56 np0005590810 nova_compute[251104]: 2026-01-21 16:38:56.201 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:38:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:56.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:56 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 21 11:38:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:38:57.185Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:38:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:38:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:38:58 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 21 11:38:58 np0005590810 ceph-mon[74380]: paxos.0).electionLogic(15) init, last seen epoch 15, mid-election, bumping
Jan 21 11:38:58 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:38:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:38:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:38:58.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:38:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 21 11:38:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:38:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:38:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:38:59.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:00 np0005590810 nova_compute[251104]: 2026-01-21 16:39:00.292 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:39:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:39:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:00.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:39:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 21 11:39:00 np0005590810 podman[268492]: 2026-01-21 16:39:00.693435501 +0000 UTC m=+0.065167585 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 11:39:00 np0005590810 ceph-mds[94997]: mds.beacon.cephfs.compute-0.hjphzb missed beacon ack from the monitors
Jan 21 11:39:01 np0005590810 nova_compute[251104]: 2026-01-21 16:39:01.147 251108 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769013526.145326, b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 21 11:39:01 np0005590810 nova_compute[251104]: 2026-01-21 16:39:01.148 251108 INFO nova.compute.manager [-] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] VM Stopped (Lifecycle Event)
Jan 21 11:39:01 np0005590810 nova_compute[251104]: 2026-01-21 16:39:01.204 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:39:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=cleanup t=2026-01-21T16:39:01.653480767Z level=info msg="Completed cleanup jobs" duration=45.642779ms
Jan 21 11:39:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=grafana.update.checker t=2026-01-21T16:39:01.742130771Z level=info msg="Update check succeeded" duration=62.367306ms
Jan 21 11:39:01 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=plugins.update.checker t=2026-01-21T16:39:01.784835798Z level=info msg="Update check succeeded" duration=106.302891ms
Jan 21 11:39:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:01.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:02.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:02 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,2)
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : monmap epoch 3
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsid d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : last_changed 2026-01-21T16:06:11.900214+0000
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : created 2026-01-21T16:02:46.356140+0000
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.dfgygz=up:active} 2 up:standby
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.ygffhs(active, since 30m), standbys: compute-2.kdxyxe, compute-1.oewgcf
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-1
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-1
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] :     mon.compute-2 (rank 1) addr [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] is down (out of quorum)
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:39:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:39:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:39:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:03.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:39:04 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:39:04 np0005590810 podman[268545]: 2026-01-21 16:39:04.203795042 +0000 UTC m=+0.086684903 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 21 11:39:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:04.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:04 np0005590810 ceph-mon[74380]: mon.compute-1 calling monitor election
Jan 21 11:39:04 np0005590810 ceph-mon[74380]: mon.compute-0 calling monitor election
Jan 21 11:39:04 np0005590810 ceph-mon[74380]: mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,2)
Jan 21 11:39:04 np0005590810 ceph-mon[74380]: Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Jan 21 11:39:04 np0005590810 ceph-mon[74380]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-1
Jan 21 11:39:04 np0005590810 ceph-mon[74380]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-1
Jan 21 11:39:04 np0005590810 ceph-mon[74380]:    mon.compute-2 (rank 1) addr [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] is down (out of quorum)
Jan 21 11:39:04 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:39:04 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:39:05 np0005590810 nova_compute[251104]: 2026-01-21 16:39:05.293 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:39:05] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Jan 21 11:39:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:39:05] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Jan 21 11:39:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:39:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:06.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:06 np0005590810 nova_compute[251104]: 2026-01-21 16:39:06.206 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:39:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:06.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:39:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:39:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:39:07.187Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:39:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:08.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:39:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:08.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:39:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 21 11:39:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:39:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:39:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:39:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:39:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:39:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:39:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:39:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:39:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:39:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:39:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:10.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:39:10 np0005590810 nova_compute[251104]: 2026-01-21 16:39:10.295 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:39:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:10.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:39:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:39:10 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:39:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:39:11 np0005590810 nova_compute[251104]: 2026-01-21 16:39:11.209 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:12.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:39:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:12.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:39:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:14.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:39:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:14.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:39:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:15 np0005590810 nova_compute[251104]: 2026-01-21 16:39:15.297 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:39:15] "GET /metrics HTTP/1.1" 200 48449 "" "Prometheus/2.51.0"
Jan 21 11:39:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:39:15] "GET /metrics HTTP/1.1" 200 48449 "" "Prometheus/2.51.0"
Jan 21 11:39:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:39:15 np0005590810 ovn_controller[152632]: 2026-01-21T16:39:15Z|00075|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 21 11:39:16 np0005590810 nova_compute[251104]: 2026-01-21 16:39:16.251 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:39:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:16.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:39:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:39:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:16.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:39:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:39:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:39:17.188Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:39:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:39:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:18.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:39:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:39:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:18.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:39:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:20.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:20 np0005590810 nova_compute[251104]: 2026-01-21 16:39:20.301 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:20.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:39:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 no beacon from mds.0.4 (gid: 24157 addr: [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] state: up:active) since 17.6976
Jan 21 11:39:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:39:21 np0005590810 nova_compute[251104]: 2026-01-21 16:39:21.254 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:39:22.029 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:39:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:39:22.029 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:39:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:39:22.029 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:39:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:22.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:22.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:24.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:39:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:39:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:24.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:25 np0005590810 nova_compute[251104]: 2026-01-21 16:39:25.304 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:39:25] "GET /metrics HTTP/1.1" 200 48446 "" "Prometheus/2.51.0"
Jan 21 11:39:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:39:25] "GET /metrics HTTP/1.1" 200 48446 "" "Prometheus/2.51.0"
Jan 21 11:39:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 no beacon from mds.0.4 (gid: 24157 addr: [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] state: up:active) since 22.6994
Jan 21 11:39:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:39:26 np0005590810 nova_compute[251104]: 2026-01-21 16:39:26.256 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:26.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:26.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:39:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:39:27.190Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:39:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:28.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:28.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:30.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:30 np0005590810 nova_compute[251104]: 2026-01-21 16:39:30.305 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:30.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:39:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 no beacon from mds.0.4 (gid: 24157 addr: [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] state: up:active) since 27.701
Jan 21 11:39:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:39:31 np0005590810 nova_compute[251104]: 2026-01-21 16:39:31.259 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:31 np0005590810 podman[268645]: 2026-01-21 16:39:31.68113759 +0000 UTC m=+0.053884210 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 11:39:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:39:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:32.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:39:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:32.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.763277) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013573763340, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1458, "num_deletes": 501, "total_data_size": 2083664, "memory_usage": 2116576, "flush_reason": "Manual Compaction"}
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013573781856, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 2024121, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28099, "largest_seqno": 29556, "table_properties": {"data_size": 2017831, "index_size": 2980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17908, "raw_average_key_size": 20, "raw_value_size": 2002807, "raw_average_value_size": 2247, "num_data_blocks": 130, "num_entries": 891, "num_filter_entries": 891, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769013461, "oldest_key_time": 1769013461, "file_creation_time": 1769013573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 18663 microseconds, and 9719 cpu microseconds.
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.781935) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 2024121 bytes OK
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.781967) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.784121) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.784148) EVENT_LOG_v1 {"time_micros": 1769013573784140, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.784180) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 2076233, prev total WAL file size 2076233, number of live WAL files 2.
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.785023) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1976KB)], [62(14MB)]
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013573785111, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 17215896, "oldest_snapshot_seqno": -1}
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5759 keys, 10982228 bytes, temperature: kUnknown
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013573867577, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10982228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10945338, "index_size": 21420, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 149035, "raw_average_key_size": 25, "raw_value_size": 10842927, "raw_average_value_size": 1882, "num_data_blocks": 858, "num_entries": 5759, "num_filter_entries": 5759, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769013573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.867896) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10982228 bytes
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.869586) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.5 rd, 133.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 14.5 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(13.9) write-amplify(5.4) OK, records in: 6787, records dropped: 1028 output_compression: NoCompression
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.869607) EVENT_LOG_v1 {"time_micros": 1769013573869597, "job": 34, "event": "compaction_finished", "compaction_time_micros": 82579, "compaction_time_cpu_micros": 26538, "output_level": 6, "num_output_files": 1, "total_output_size": 10982228, "num_input_records": 6787, "num_output_records": 5759, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013573870137, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013573874263, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.784897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.874367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.874378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.874382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.874386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:39:33 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:39:33.874390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:39:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:34.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:34 np0005590810 podman[268666]: 2026-01-21 16:39:34.712512746 +0000 UTC m=+0.089973457 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 11:39:35 np0005590810 nova_compute[251104]: 2026-01-21 16:39:35.308 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:39:35] "GET /metrics HTTP/1.1" 200 48446 "" "Prometheus/2.51.0"
Jan 21 11:39:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:39:35] "GET /metrics HTTP/1.1" 200 48446 "" "Prometheus/2.51.0"
Jan 21 11:39:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 no beacon from mds.0.4 (gid: 24157 addr: [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] state: up:active) since 32.7026
Jan 21 11:39:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:39:36 np0005590810 nova_compute[251104]: 2026-01-21 16:39:36.261 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:39:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:36.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:39:36 np0005590810 nova_compute[251104]: 2026-01-21 16:39:36.363 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:39:36 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.ygffhs(active, since 30m), standbys: compute-1.oewgcf
Jan 21 11:39:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:36.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:39:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:39:37.190Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:39:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:39:37.191Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:39:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:39:37.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:39:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:39:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:38.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:39:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:38.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:39:39
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'images', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', '.nfs', 'vms', '.mgr', 'backups']
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:39:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:39:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:39:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:39:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:39:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:40.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:39:40 np0005590810 nova_compute[251104]: 2026-01-21 16:39:40.309 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:40 np0005590810 nova_compute[251104]: 2026-01-21 16:39:40.367 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:39:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:40.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:39:40 np0005590810 ceph-mgr[74671]: ms_deliver_dispatch: unhandled message 0x56491e690a80 mgrreport(mgr.compute-2.kdxyxe +0-0 packed 54) from mgr.24196 192.168.122.102:0/176768011
Jan 21 11:39:40 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 no beacon from mds.0.4 (gid: 24157 addr: [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] state: up:active) since 37.7076
Jan 21 11:39:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:39:41 np0005590810 nova_compute[251104]: 2026-01-21 16:39:41.264 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:41 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:42.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:42 np0005590810 nova_compute[251104]: 2026-01-21 16:39:42.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:39:42 np0005590810 nova_compute[251104]: 2026-01-21 16:39:42.370 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 11:39:42 np0005590810 nova_compute[251104]: 2026-01-21 16:39:42.370 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 11:39:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:42.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:42 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:43 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:44.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:44.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:44 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:45 np0005590810 nova_compute[251104]: 2026-01-21 16:39:45.311 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:39:45] "GET /metrics HTTP/1.1" 200 48218 "" "Prometheus/2.51.0"
Jan 21 11:39:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:39:45] "GET /metrics HTTP/1.1" 200 48218 "" "Prometheus/2.51.0"
Jan 21 11:39:45 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 no beacon from mds.0.4 (gid: 24157 addr: [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] state: up:active) since 42.7088
Jan 21 11:39:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:39:46 np0005590810 nova_compute[251104]: 2026-01-21 16:39:46.266 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:46.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:39:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:46.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:39:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:39:46 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0[105082]: logger=infra.usagestats t=2026-01-21T16:39:46.635599535Z level=info msg="Usage stats are ready to report"
Jan 21 11:39:46 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:39:47.192Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:39:47 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:48.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:39:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:48.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:39:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:48 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:49 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:50 np0005590810 nova_compute[251104]: 2026-01-21 16:39:50.313 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:50.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:50.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:39:50 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 no beacon from mds.0.4 (gid: 24157 addr: [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] state: up:active) since 47.7105
Jan 21 11:39:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:39:51 np0005590810 nova_compute[251104]: 2026-01-21 16:39:51.269 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:51 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:52.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:52.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:52 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:52 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:53 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:39:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:39:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:54.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:54.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:54 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:54 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:55 np0005590810 nova_compute[251104]: 2026-01-21 16:39:55.315 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:39:55] "GET /metrics HTTP/1.1" 200 48222 "" "Prometheus/2.51.0"
Jan 21 11:39:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:39:55] "GET /metrics HTTP/1.1" 200 48222 "" "Prometheus/2.51.0"
Jan 21 11:39:55 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 no beacon from mds.0.4 (gid: 24157 addr: [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] state: up:active) since 52.712
Jan 21 11:39:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:39:56 np0005590810 nova_compute[251104]: 2026-01-21 16:39:56.272 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:39:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:56.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:39:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:56.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:39:56 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:39:56 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:39:57.193Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:39:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:39:57.193Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:39:57 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:39:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:39:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:39:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:39:58.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:39:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:39:58 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:39:59 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:00 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1/3 mons down, quorum compute-0,compute-1
Jan 21 11:40:00 np0005590810 ceph-mon[74380]: overall HEALTH_WARN 1/3 mons down, quorum compute-0,compute-1
Jan 21 11:40:00 np0005590810 nova_compute[251104]: 2026-01-21 16:40:00.316 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:00.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:40:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:00.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:40:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:40:00 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 no beacon from mds.0.4 (gid: 24157 addr: [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] state: up:active) since 57.7133
Jan 21 11:40:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:40:01 np0005590810 nova_compute[251104]: 2026-01-21 16:40:01.275 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:01 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:02.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:40:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:02.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:40:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:40:02 np0005590810 ceph-mgr[74671]: [dashboard INFO request] [192.168.122.100:45136] [POST] [200] [0.006s] [4.0B] [e815e5be-aa4b-48d2-9e2d-4873ecf46c51] /api/prometheus_receiver
Jan 21 11:40:02 np0005590810 ceph-mgr[74671]: [dashboard INFO request] [192.168.122.100:45140] [POST] [200] [0.004s] [4.0B] [fe49322c-6e53-4cda-832f-e2a01becf51a] /api/prometheus_receiver
Jan 21 11:40:02 np0005590810 podman[268745]: 2026-01-21 16:40:02.705284942 +0000 UTC m=+0.077237166 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 21 11:40:02 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:03 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:40:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:04.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:40:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:04.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:40:04 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:05 np0005590810 podman[268898]: 2026-01-21 16:40:05.269407653 +0000 UTC m=+0.119136205 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 11:40:05 np0005590810 nova_compute[251104]: 2026-01-21 16:40:05.319 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:40:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:40:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:05 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 21 11:40:05 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:40:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:40:05] "GET /metrics HTTP/1.1" 200 48222 "" "Prometheus/2.51.0"
Jan 21 11:40:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:40:05] "GET /metrics HTTP/1.1" 200 48222 "" "Prometheus/2.51.0"
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 no beacon from mds.0.4 (gid: 24157 addr: [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] state: up:active) since 62.7146
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9  marking 24157 [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] mds.0.4 up:active laggy
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9  replacing 24157 [v2:192.168.122.102:6804/3127718308,v1:192.168.122.102:6805/3127718308] mds.0.4 up:active with 14436/cephfs.compute-0.hjphzb [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669]
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Replacing daemon mds.cephfs.compute-2.dfgygz as rank 0 with standby daemon mds.cephfs.compute-0.hjphzb
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e9 fail_mds_gid 24157 mds.cephfs.compute-2.dfgygz role 0
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : MDS daemon mds.cephfs.compute-2.dfgygz is removed because it is dead or otherwise unavailable.
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e10 new map
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e10 print_map#012e10#012btime 2026-01-21T16:40:06:270221+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01110#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T16:05:57.396255+0000#012modified#0112026-01-21T16:40:06.270219+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#011147#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14436}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.hjphzb{0:14436} state up:replay seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.akvqho{-1:34133} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/420177392,v1:192.168.122.101:6805/420177392] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 21 11:40:06 np0005590810 nova_compute[251104]: 2026-01-21 16:40:06.278 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:06 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb Updating MDS map to version 10 from mon.0
Jan 21 11:40:06 np0005590810 ceph-mds[94997]: mds.0.10 handle_mds_map I am now mds.0.10
Jan 21 11:40:06 np0005590810 ceph-mds[94997]: mds.0.10 handle_mds_map state change up:standby --> up:replay
Jan 21 11:40:06 np0005590810 ceph-mds[94997]: mds.0.10 replay_start
Jan 21 11:40:06 np0005590810 ceph-mds[94997]: mds.0.10  waiting for osdmap 147 (which blocklists prior instance)
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.hjphzb=up:replay} 1 up:standby
Jan 21 11:40:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:06.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:06 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:06.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:40:06 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:06 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:07 np0005590810 ceph-mds[94997]: mds.0.cache creating system inode with ino:0x100
Jan 21 11:40:07 np0005590810 ceph-mds[94997]: mds.0.cache creating system inode with ino:0x1
Jan 21 11:40:07 np0005590810 ceph-mds[94997]: mds.0.10 Finished replaying journal
Jan 21 11:40:07 np0005590810 ceph-mds[94997]: mds.0.10 making mds journal writeable
Jan 21 11:40:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:07.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:40:07 np0005590810 ceph-mon[74380]: Replacing daemon mds.cephfs.compute-2.dfgygz as rank 0 with standby daemon mds.cephfs.compute-0.hjphzb
Jan 21 11:40:07 np0005590810 ceph-mon[74380]: MDS daemon mds.cephfs.compute-2.dfgygz is removed because it is dead or otherwise unavailable.
Jan 21 11:40:07 np0005590810 ceph-mon[74380]: Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Jan 21 11:40:07 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:07 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:07 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e11 new map
Jan 21 11:40:07 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb Updating MDS map to version 11 from mon.0
Jan 21 11:40:07 np0005590810 ceph-mds[94997]: mds.0.10 handle_mds_map I am now mds.0.10
Jan 21 11:40:07 np0005590810 ceph-mds[94997]: mds.0.10 handle_mds_map state change up:replay --> up:reconnect
Jan 21 11:40:07 np0005590810 ceph-mds[94997]: mds.0.10 reconnect_start
Jan 21 11:40:07 np0005590810 ceph-mds[94997]: mds.0.10 reopen_log
Jan 21 11:40:07 np0005590810 ceph-mds[94997]: mds.0.10 reconnect_done
Jan 21 11:40:07 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e11 print_map#012e11#012btime 2026-01-21T16:40:07:668645+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01111#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T16:05:57.396255+0000#012modified#0112026-01-21T16:40:07.047376+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#011147#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14436}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.hjphzb{0:14436} state up:reconnect seq 499 join_fscid=1 addr [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.akvqho{-1:34133} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/420177392,v1:192.168.122.101:6805/420177392] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 11:40:07 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] up:reconnect
Jan 21 11:40:07 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.hjphzb=up:reconnect} 1 up:standby
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 11:40:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:08.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:08.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 21 11:40:08 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e12 new map
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e12 print_map#012e12#012btime 2026-01-21T16:40:08:932464+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T16:05:57.396255+0000#012modified#0112026-01-21T16:40:07.935765+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#011147#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14436}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.hjphzb{0:14436} state up:rejoin seq 500 join_fscid=1 addr [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.akvqho{-1:34133} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/420177392,v1:192.168.122.101:6805/420177392] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 11:40:08 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb Updating MDS map to version 12 from mon.0
Jan 21 11:40:08 np0005590810 ceph-mds[94997]: mds.0.10 handle_mds_map I am now mds.0.10
Jan 21 11:40:08 np0005590810 ceph-mds[94997]: mds.0.10 handle_mds_map state change up:reconnect --> up:rejoin
Jan 21 11:40:08 np0005590810 ceph-mds[94997]: mds.0.10 rejoin_start
Jan 21 11:40:08 np0005590810 ceph-mds[94997]: mds.0.10 rejoin_joint_start
Jan 21 11:40:08 np0005590810 ceph-mds[94997]: mds.0.10 rejoin_done
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] up:rejoin
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.hjphzb=up:rejoin} 1 up:standby
Jan 21 11:40:08 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.hjphzb is now active in filesystem cephfs as rank 0
Jan 21 11:40:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 21 11:40:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:40:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:40:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:40:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:40:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:40:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:40:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:40:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:40:09 np0005590810 nova_compute[251104]: 2026-01-21 16:40:09.719 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 21 11:40:09 np0005590810 nova_compute[251104]: 2026-01-21 16:40:09.719 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 11:40:09 np0005590810 nova_compute[251104]: 2026-01-21 16:40:09.720 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:09 np0005590810 nova_compute[251104]: 2026-01-21 16:40:09.720 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:09 np0005590810 nova_compute[251104]: 2026-01-21 16:40:09.721 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:09 np0005590810 nova_compute[251104]: 2026-01-21 16:40:09.721 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:09 np0005590810 nova_compute[251104]: 2026-01-21 16:40:09.721 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:09 np0005590810 nova_compute[251104]: 2026-01-21 16:40:09.721 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 11:40:09 np0005590810 nova_compute[251104]: 2026-01-21 16:40:09.722 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:09 np0005590810 nova_compute[251104]: 2026-01-21 16:40:09.774 251108 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 61.68 sec#033[00m
Jan 21 11:40:09 np0005590810 nova_compute[251104]: 2026-01-21 16:40:09.780 251108 DEBUG nova.compute.manager [None req-92016243-1fbe-498f-8ce3-2298b3e0931e - - - - - -] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 21 11:40:09 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:09 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Jan 21 11:40:09 np0005590810 ceph-mon[74380]: daemon mds.cephfs.compute-0.hjphzb is now active in filesystem cephfs as rank 0
Jan 21 11:40:09 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 new map
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 print_map#012e13#012btime 2026-01-21T16:40:09:984110+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T16:05:57.396255+0000#012modified#0112026-01-21T16:40:09.984107+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#011147#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14436}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14436 members: 14436#012[mds.cephfs.compute-0.hjphzb{0:14436} state up:active seq 501 join_fscid=1 addr [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.akvqho{-1:34133} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/420177392,v1:192.168.122.101:6805/420177392] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 11:40:10 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb Updating MDS map to version 13 from mon.0
Jan 21 11:40:10 np0005590810 ceph-mds[94997]: mds.0.10 handle_mds_map I am now mds.0.10
Jan 21 11:40:10 np0005590810 ceph-mds[94997]: mds.0.10 handle_mds_map state change up:rejoin --> up:active
Jan 21 11:40:10 np0005590810 ceph-mds[94997]: mds.0.10 recovery_done -- successful recovery!
Jan 21 11:40:10 np0005590810 ceph-mds[94997]: mds.0.10 active_start
Jan 21 11:40:10 np0005590810 ceph-mds[94997]: mds.0.10 cluster recovered.
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] up:active
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.hjphzb=up:active} 1 up:standby
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:40:10 np0005590810 nova_compute[251104]: 2026-01-21 16:40:10.123 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:40:10 np0005590810 nova_compute[251104]: 2026-01-21 16:40:10.123 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:40:10 np0005590810 nova_compute[251104]: 2026-01-21 16:40:10.123 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:40:10 np0005590810 nova_compute[251104]: 2026-01-21 16:40:10.124 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:40:10 np0005590810 nova_compute[251104]: 2026-01-21 16:40:10.124 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:10 np0005590810 ceph-mds[94997]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 21 11:40:10 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mds-cephfs-compute-0-hjphzb[94993]: 2026-01-21T16:40:10.155+0000 7f4ffeb1a640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 21 11:40:10 np0005590810 ceph-mgr[74671]: ms_deliver_dispatch: unhandled message 0x56491deb2380 mgrreport(mds.cephfs.compute-2.dfgygz +0-0 packed 1638) from mds.0 v2:192.168.122.102:6804/3127718308
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 all = 0
Jan 21 11:40:10 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mds.cephfs.compute-2.dfgygz: (22) Invalid argument
Jan 21 11:40:10 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.dfgygz v2:192.168.122.102:6804/3127718308; not ready for session (expect reconnect)
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 all = 0
Jan 21 11:40:10 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mds.cephfs.compute-2.dfgygz: (22) Invalid argument
Jan 21 11:40:10 np0005590810 nova_compute[251104]: 2026-01-21 16:40:10.320 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2928754681' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2928754681' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 11:40:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:10.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:10.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 9 op/s
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971204565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:40:10 np0005590810 nova_compute[251104]: 2026-01-21 16:40:10.646 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:40:10 np0005590810 nova_compute[251104]: 2026-01-21 16:40:10.850 251108 WARNING nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:40:10 np0005590810 nova_compute[251104]: 2026-01-21 16:40:10.851 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4610MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 11:40:10 np0005590810 nova_compute[251104]: 2026-01-21 16:40:10.852 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:40:10 np0005590810 nova_compute[251104]: 2026-01-21 16:40:10.852 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:40:10 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3175931677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Jan 21 11:40:10 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:11 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:11 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check failed: 24 slow ops, oldest one blocked for 75 sec, mon.compute-2 has slow ops (SLOW_OPS)
Jan 21 11:40:11 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.dfgygz v2:192.168.122.102:6804/3127718308; not ready for session (expect reconnect)
Jan 21 11:40:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:11 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 all = 0
Jan 21 11:40:11 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mds.cephfs.compute-2.dfgygz: (22) Invalid argument
Jan 21 11:40:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:40:11 np0005590810 nova_compute[251104]: 2026-01-21 16:40:11.281 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:11 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:12 np0005590810 ceph-mon[74380]: Health check failed: 24 slow ops, oldest one blocked for 75 sec, mon.compute-2 has slow ops (SLOW_OPS)
Jan 21 11:40:12 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.dfgygz v2:192.168.122.102:6804/3127718308; not ready for session (expect reconnect)
Jan 21 11:40:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:12 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:12 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 all = 0
Jan 21 11:40:12 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mds.cephfs.compute-2.dfgygz: (22) Invalid argument
Jan 21 11:40:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:12.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:12.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 9 op/s
Jan 21 11:40:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:12.644Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:40:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:12.644Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:40:12 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:12.644Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:40:12 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:13 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.dfgygz v2:192.168.122.102:6804/3127718308; not ready for session (expect reconnect)
Jan 21 11:40:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:13 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:13 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 all = 0
Jan 21 11:40:13 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mds.cephfs.compute-2.dfgygz: (22) Invalid argument
Jan 21 11:40:13 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:14 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.dfgygz v2:192.168.122.102:6804/3127718308; not ready for session (expect reconnect)
Jan 21 11:40:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:14 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:14 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 all = 0
Jan 21 11:40:14 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mds.cephfs.compute-2.dfgygz: (22) Invalid argument
Jan 21 11:40:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:14.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:14.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 9 op/s
Jan 21 11:40:14 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:15 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.dfgygz v2:192.168.122.102:6804/3127718308; not ready for session (expect reconnect)
Jan 21 11:40:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:15 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:15 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 all = 0
Jan 21 11:40:15 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mds.cephfs.compute-2.dfgygz: (22) Invalid argument
Jan 21 11:40:15 np0005590810 nova_compute[251104]: 2026-01-21 16:40:15.321 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:40:15] "GET /metrics HTTP/1.1" 200 48039 "" "Prometheus/2.51.0"
Jan 21 11:40:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:40:15] "GET /metrics HTTP/1.1" 200 48039 "" "Prometheus/2.51.0"
Jan 21 11:40:15 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:16 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.dfgygz v2:192.168.122.102:6804/3127718308; not ready for session (expect reconnect)
Jan 21 11:40:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:40:16 np0005590810 nova_compute[251104]: 2026-01-21 16:40:16.284 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:16 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 all = 0
Jan 21 11:40:16 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mds.cephfs.compute-2.dfgygz: (22) Invalid argument
Jan 21 11:40:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:16.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:16.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 8 op/s
Jan 21 11:40:16 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:17 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.dfgygz v2:192.168.122.102:6804/3127718308; not ready for session (expect reconnect)
Jan 21 11:40:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:17 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:17 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 all = 0
Jan 21 11:40:17 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mds.cephfs.compute-2.dfgygz: (22) Invalid argument
Jan 21 11:40:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:17.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:40:17 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check update: 26 slow ops, oldest one blocked for 80 sec, mon.compute-2 has slow ops (SLOW_OPS)
Jan 21 11:40:17 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:18 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.dfgygz v2:192.168.122.102:6804/3127718308; not ready for session (expect reconnect)
Jan 21 11:40:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:18 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:18 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 all = 0
Jan 21 11:40:18 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mds.cephfs.compute-2.dfgygz: (22) Invalid argument
Jan 21 11:40:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:18.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:18 np0005590810 ceph-mon[74380]: Health check update: 26 slow ops, oldest one blocked for 80 sec, mon.compute-2 has slow ops (SLOW_OPS)
Jan 21 11:40:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:40:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:18.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:40:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 7 op/s
Jan 21 11:40:18 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:19 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.dfgygz v2:192.168.122.102:6804/3127718308; not ready for session (expect reconnect)
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 all = 0
Jan 21 11:40:19 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mds.cephfs.compute-2.dfgygz: (22) Invalid argument
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: paxos.0).electionLogic(19) init, last seen epoch 19, mid-election, bumping
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : monmap epoch 3
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsid d9745984-fea8-5195-8ec5-61f685b5c785
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : last_changed 2026-01-21T16:06:11.900214+0000
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : created 2026-01-21T16:02:46.356140+0000
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.hjphzb=up:active} 1 up:standby
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.ygffhs(active, since 31m), standbys: compute-1.oewgcf
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 21 11:40:19 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 26 slow ops, oldest one blocked for 80 sec, mon.compute-2 has slow ops
Jan 21 11:40:19 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 26 slow ops, oldest one blocked for 80 sec, mon.compute-2 has slow ops
Jan 21 11:40:20 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.dfgygz v2:192.168.122.102:6804/3127718308; not ready for session (expect reconnect)
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e13 all = 0
Jan 21 11:40:20 np0005590810 ceph-mgr[74671]: mgr finish mon failed to return metadata for mds.cephfs.compute-2.dfgygz: (22) Invalid argument
Jan 21 11:40:20 np0005590810 nova_compute[251104]: 2026-01-21 16:40:20.323 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:20.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:20.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 8 op/s
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: OSD bench result of 2816.216695 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: mon.compute-0 calling monitor election
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: Health detail: HEALTH_WARN 26 slow ops, oldest one blocked for 80 sec, mon.compute-2 has slow ops
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: [WRN] SLOW_OPS: 26 slow ops, oldest one blocked for 80 sec, mon.compute-2 has slow ops
Jan 21 11:40:20 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check update: 30 slow ops, oldest one blocked for 88 sec, mon.compute-2 has slow ops (SLOW_OPS)
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e14 new map
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e14 print_map#012e14#012btime 2026-01-21T16:40:20.965556+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T16:05:57.396255+0000#012modified#0112026-01-21T16:40:09.984107+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#011147#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14436}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14436 members: 14436#012[mds.cephfs.compute-0.hjphzb{0:14436} state up:active seq 501 join_fscid=1 addr [v2:192.168.122.100:6806/2677667669,v1:192.168.122.100:6807/2677667669] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.dfgygz{-1:25594} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/1985986064,v1:192.168.122.102:6805/1985986064] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.akvqho{-1:34133} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/420177392,v1:192.168.122.101:6805/420177392] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1985986064,v1:192.168.122.102:6805/1985986064] up:boot
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.hjphzb=up:active} 2 up:standby
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"} v 0)
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dfgygz"}]: dispatch
Jan 21 11:40:20 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).mds e14 all = 0
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdxyxe started
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:40:21 np0005590810 nova_compute[251104]: 2026-01-21 16:40:21.286 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: Health check update: 30 slow ops, oldest one blocked for 88 sec, mon.compute-2 has slow ops (SLOW_OPS)
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:21 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:40:21 np0005590810 ceph-mgr[74671]: mgr.server handle_open ignoring open from mgr.compute-2.kdxyxe 192.168.122.102:0/176768011; not ready for session (expect reconnect)
Jan 21 11:40:21 np0005590810 podman[269115]: 2026-01-21 16:40:21.935496376 +0000 UTC m=+0.051406224 container create 9f757d150f876d32f97f23b79b9f18b85b3d90c8a1c7d2fb71c176cd119e30c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 11:40:21 np0005590810 systemd[1]: Started libpod-conmon-9f757d150f876d32f97f23b79b9f18b85b3d90c8a1c7d2fb71c176cd119e30c9.scope.
Jan 21 11:40:22 np0005590810 podman[269115]: 2026-01-21 16:40:21.913502114 +0000 UTC m=+0.029411972 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:40:22 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:40:22 np0005590810 podman[269115]: 2026-01-21 16:40:22.02817438 +0000 UTC m=+0.144084268 container init 9f757d150f876d32f97f23b79b9f18b85b3d90c8a1c7d2fb71c176cd119e30c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_banach, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Jan 21 11:40:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:40:22.030 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:40:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:40:22.031 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:40:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:40:22.031 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:40:22 np0005590810 podman[269115]: 2026-01-21 16:40:22.037102017 +0000 UTC m=+0.153011875 container start 9f757d150f876d32f97f23b79b9f18b85b3d90c8a1c7d2fb71c176cd119e30c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_banach, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 11:40:22 np0005590810 podman[269115]: 2026-01-21 16:40:22.041993199 +0000 UTC m=+0.157903087 container attach 9f757d150f876d32f97f23b79b9f18b85b3d90c8a1c7d2fb71c176cd119e30c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_banach, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:40:22 np0005590810 admiring_banach[269131]: 167 167
Jan 21 11:40:22 np0005590810 systemd[1]: libpod-9f757d150f876d32f97f23b79b9f18b85b3d90c8a1c7d2fb71c176cd119e30c9.scope: Deactivated successfully.
Jan 21 11:40:22 np0005590810 podman[269115]: 2026-01-21 16:40:22.045449616 +0000 UTC m=+0.161359454 container died 9f757d150f876d32f97f23b79b9f18b85b3d90c8a1c7d2fb71c176cd119e30c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_banach, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 21 11:40:22 np0005590810 ceph-mon[74380]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.ygffhs(active, since 31m), standbys: compute-2.kdxyxe, compute-1.oewgcf
Jan 21 11:40:22 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.kdxyxe", "id": "compute-2.kdxyxe"} v 0)
Jan 21 11:40:22 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "mgr metadata", "who": "compute-2.kdxyxe", "id": "compute-2.kdxyxe"}]: dispatch
Jan 21 11:40:22 np0005590810 systemd[1]: var-lib-containers-storage-overlay-3c87f666f900d908df90fe53e8e5394c5569f813fa4ae7115de8efb94acdc8ed-merged.mount: Deactivated successfully.
Jan 21 11:40:22 np0005590810 podman[269115]: 2026-01-21 16:40:22.089518183 +0000 UTC m=+0.205428031 container remove 9f757d150f876d32f97f23b79b9f18b85b3d90c8a1c7d2fb71c176cd119e30c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_banach, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 11:40:22 np0005590810 systemd[1]: libpod-conmon-9f757d150f876d32f97f23b79b9f18b85b3d90c8a1c7d2fb71c176cd119e30c9.scope: Deactivated successfully.
Jan 21 11:40:22 np0005590810 podman[269154]: 2026-01-21 16:40:22.270365641 +0000 UTC m=+0.047620568 container create 036b2edce0874335b7d083760bf7db24c51526a4b7fcc349e3028f09e5d213be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:40:22 np0005590810 systemd[1]: Started libpod-conmon-036b2edce0874335b7d083760bf7db24c51526a4b7fcc349e3028f09e5d213be.scope.
Jan 21 11:40:22 np0005590810 podman[269154]: 2026-01-21 16:40:22.247319506 +0000 UTC m=+0.024574443 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:40:22 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:40:22 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92a69ff8207dc1b0b76fadb7b8d17eedc951dd77aebc1ab02516188dc83a916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:22 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92a69ff8207dc1b0b76fadb7b8d17eedc951dd77aebc1ab02516188dc83a916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:22 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92a69ff8207dc1b0b76fadb7b8d17eedc951dd77aebc1ab02516188dc83a916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:22 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92a69ff8207dc1b0b76fadb7b8d17eedc951dd77aebc1ab02516188dc83a916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:22 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92a69ff8207dc1b0b76fadb7b8d17eedc951dd77aebc1ab02516188dc83a916/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:22 np0005590810 podman[269154]: 2026-01-21 16:40:22.372315312 +0000 UTC m=+0.149570249 container init 036b2edce0874335b7d083760bf7db24c51526a4b7fcc349e3028f09e5d213be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_antonelli, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:40:22 np0005590810 podman[269154]: 2026-01-21 16:40:22.378438012 +0000 UTC m=+0.155692929 container start 036b2edce0874335b7d083760bf7db24c51526a4b7fcc349e3028f09e5d213be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_antonelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 21 11:40:22 np0005590810 podman[269154]: 2026-01-21 16:40:22.382386035 +0000 UTC m=+0.159641052 container attach 036b2edce0874335b7d083760bf7db24c51526a4b7fcc349e3028f09e5d213be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_antonelli, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 11:40:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:40:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:22.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:40:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:22.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Jan 21 11:40:22 np0005590810 nova_compute[251104]: 2026-01-21 16:40:22.629 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Instance b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 21 11:40:22 np0005590810 nova_compute[251104]: 2026-01-21 16:40:22.630 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 11:40:22 np0005590810 nova_compute[251104]: 2026-01-21 16:40:22.630 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 11:40:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:22.644Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:40:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:22.644Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:40:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:22.645Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:40:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:22.645Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:40:22 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:22.645Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:40:22 np0005590810 affectionate_antonelli[269171]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:40:22 np0005590810 affectionate_antonelli[269171]: --> All data devices are unavailable
Jan 21 11:40:22 np0005590810 systemd[1]: libpod-036b2edce0874335b7d083760bf7db24c51526a4b7fcc349e3028f09e5d213be.scope: Deactivated successfully.
Jan 21 11:40:22 np0005590810 conmon[269171]: conmon 036b2edce0874335b7d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-036b2edce0874335b7d083760bf7db24c51526a4b7fcc349e3028f09e5d213be.scope/container/memory.events
Jan 21 11:40:22 np0005590810 podman[269154]: 2026-01-21 16:40:22.75469939 +0000 UTC m=+0.531954317 container died 036b2edce0874335b7d083760bf7db24c51526a4b7fcc349e3028f09e5d213be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_antonelli, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 21 11:40:22 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d92a69ff8207dc1b0b76fadb7b8d17eedc951dd77aebc1ab02516188dc83a916-merged.mount: Deactivated successfully.
Jan 21 11:40:22 np0005590810 podman[269154]: 2026-01-21 16:40:22.795936838 +0000 UTC m=+0.573191755 container remove 036b2edce0874335b7d083760bf7db24c51526a4b7fcc349e3028f09e5d213be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 21 11:40:22 np0005590810 nova_compute[251104]: 2026-01-21 16:40:22.807 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 11:40:22 np0005590810 systemd[1]: libpod-conmon-036b2edce0874335b7d083760bf7db24c51526a4b7fcc349e3028f09e5d213be.scope: Deactivated successfully.
Jan 21 11:40:22 np0005590810 nova_compute[251104]: 2026-01-21 16:40:22.908 251108 DEBUG nova.network.neutron [-] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 21 11:40:23 np0005590810 nova_compute[251104]: 2026-01-21 16:40:23.189 251108 INFO nova.compute.manager [-] [instance: b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c] Took 94.83 seconds to deallocate network for instance.
Jan 21 11:40:23 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:40:23 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3970507306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:40:23 np0005590810 nova_compute[251104]: 2026-01-21 16:40:23.281 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 11:40:23 np0005590810 nova_compute[251104]: 2026-01-21 16:40:23.288 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 11:40:23 np0005590810 podman[269312]: 2026-01-21 16:40:23.35971075 +0000 UTC m=+0.036509013 container create da98a93be7da3475047fdc5b057f5254e7d1df9fba3e49a14e6d35541aa641db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:40:23 np0005590810 systemd[1]: Started libpod-conmon-da98a93be7da3475047fdc5b057f5254e7d1df9fba3e49a14e6d35541aa641db.scope.
Jan 21 11:40:23 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:40:23 np0005590810 podman[269312]: 2026-01-21 16:40:23.343712784 +0000 UTC m=+0.020511067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:40:23 np0005590810 nova_compute[251104]: 2026-01-21 16:40:23.442 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 11:40:23 np0005590810 podman[269312]: 2026-01-21 16:40:23.453670714 +0000 UTC m=+0.130468997 container init da98a93be7da3475047fdc5b057f5254e7d1df9fba3e49a14e6d35541aa641db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 21 11:40:23 np0005590810 podman[269312]: 2026-01-21 16:40:23.461332791 +0000 UTC m=+0.138131054 container start da98a93be7da3475047fdc5b057f5254e7d1df9fba3e49a14e6d35541aa641db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:40:23 np0005590810 podman[269312]: 2026-01-21 16:40:23.464553111 +0000 UTC m=+0.141351374 container attach da98a93be7da3475047fdc5b057f5254e7d1df9fba3e49a14e6d35541aa641db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:40:23 np0005590810 reverent_satoshi[269328]: 167 167
Jan 21 11:40:23 np0005590810 systemd[1]: libpod-da98a93be7da3475047fdc5b057f5254e7d1df9fba3e49a14e6d35541aa641db.scope: Deactivated successfully.
Jan 21 11:40:23 np0005590810 podman[269312]: 2026-01-21 16:40:23.466199802 +0000 UTC m=+0.142998055 container died da98a93be7da3475047fdc5b057f5254e7d1df9fba3e49a14e6d35541aa641db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_satoshi, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:40:23 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c2e3e429757cbc8c1b44963b46dd27de8e33979d284c54005349e04e549cc323-merged.mount: Deactivated successfully.
Jan 21 11:40:23 np0005590810 podman[269312]: 2026-01-21 16:40:23.502591761 +0000 UTC m=+0.179390024 container remove da98a93be7da3475047fdc5b057f5254e7d1df9fba3e49a14e6d35541aa641db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:40:23 np0005590810 systemd[1]: libpod-conmon-da98a93be7da3475047fdc5b057f5254e7d1df9fba3e49a14e6d35541aa641db.scope: Deactivated successfully.
Jan 21 11:40:23 np0005590810 podman[269352]: 2026-01-21 16:40:23.691118207 +0000 UTC m=+0.048628119 container create 7504731bdf2f45092c8f7313cd77cf6bcd3b3f70c44f88e40094cf0f7af12083 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_cartwright, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 21 11:40:23 np0005590810 systemd[1]: Started libpod-conmon-7504731bdf2f45092c8f7313cd77cf6bcd3b3f70c44f88e40094cf0f7af12083.scope.
Jan 21 11:40:23 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:40:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429f2b61db6eaa4a8ddd7762fea7c0ba2f26b5f934a4fa77c8382e34ba552315/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429f2b61db6eaa4a8ddd7762fea7c0ba2f26b5f934a4fa77c8382e34ba552315/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429f2b61db6eaa4a8ddd7762fea7c0ba2f26b5f934a4fa77c8382e34ba552315/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:23 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429f2b61db6eaa4a8ddd7762fea7c0ba2f26b5f934a4fa77c8382e34ba552315/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:23 np0005590810 podman[269352]: 2026-01-21 16:40:23.668521626 +0000 UTC m=+0.026031558 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:40:23 np0005590810 podman[269352]: 2026-01-21 16:40:23.775244586 +0000 UTC m=+0.132754948 container init 7504731bdf2f45092c8f7313cd77cf6bcd3b3f70c44f88e40094cf0f7af12083 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:40:23 np0005590810 nova_compute[251104]: 2026-01-21 16:40:23.774 251108 DEBUG oslo_concurrency.lockutils [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:40:23 np0005590810 podman[269352]: 2026-01-21 16:40:23.785500183 +0000 UTC m=+0.143010075 container start 7504731bdf2f45092c8f7313cd77cf6bcd3b3f70c44f88e40094cf0f7af12083 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_cartwright, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:40:23 np0005590810 podman[269352]: 2026-01-21 16:40:23.788974691 +0000 UTC m=+0.146484583 container attach 7504731bdf2f45092c8f7313cd77cf6bcd3b3f70c44f88e40094cf0f7af12083 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_cartwright, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 21 11:40:23 np0005590810 nova_compute[251104]: 2026-01-21 16:40:23.949 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 11:40:23 np0005590810 nova_compute[251104]: 2026-01-21 16:40:23.950 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 13.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:40:23 np0005590810 nova_compute[251104]: 2026-01-21 16:40:23.950 251108 DEBUG oslo_concurrency.lockutils [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]: {
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:    "0": [
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:        {
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            "devices": [
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "/dev/loop3"
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            ],
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            "lv_name": "ceph_lv0",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            "lv_size": "21470642176",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            "name": "ceph_lv0",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            "tags": {
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.cluster_name": "ceph",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.crush_device_class": "",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.encrypted": "0",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.osd_id": "0",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.type": "block",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.vdo": "0",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:                "ceph.with_tpm": "0"
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            },
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            "type": "block",
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:            "vg_name": "ceph_vg0"
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:        }
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]:    ]
Jan 21 11:40:24 np0005590810 priceless_cartwright[269368]: }
Jan 21 11:40:24 np0005590810 systemd[1]: libpod-7504731bdf2f45092c8f7313cd77cf6bcd3b3f70c44f88e40094cf0f7af12083.scope: Deactivated successfully.
Jan 21 11:40:24 np0005590810 podman[269352]: 2026-01-21 16:40:24.105568879 +0000 UTC m=+0.463078771 container died 7504731bdf2f45092c8f7313cd77cf6bcd3b3f70c44f88e40094cf0f7af12083 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:40:24 np0005590810 systemd[1]: var-lib-containers-storage-overlay-429f2b61db6eaa4a8ddd7762fea7c0ba2f26b5f934a4fa77c8382e34ba552315-merged.mount: Deactivated successfully.
Jan 21 11:40:24 np0005590810 podman[269352]: 2026-01-21 16:40:24.161966038 +0000 UTC m=+0.519475930 container remove 7504731bdf2f45092c8f7313cd77cf6bcd3b3f70c44f88e40094cf0f7af12083 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 11:40:24 np0005590810 systemd[1]: libpod-conmon-7504731bdf2f45092c8f7313cd77cf6bcd3b3f70c44f88e40094cf0f7af12083.scope: Deactivated successfully.
Jan 21 11:40:24 np0005590810 nova_compute[251104]: 2026-01-21 16:40:24.205 251108 DEBUG oslo_concurrency.processutils [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:40:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 21 11:40:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:40:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:40:24 np0005590810 nova_compute[251104]: 2026-01-21 16:40:24.388 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:40:24.388 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:19:7b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:3b:98:31:96:2a'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:40:24 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:40:24.389 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 11:40:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:24.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:40:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:24.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:40:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Jan 21 11:40:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:40:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/568242155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:40:24 np0005590810 nova_compute[251104]: 2026-01-21 16:40:24.663 251108 DEBUG oslo_concurrency.processutils [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:40:24 np0005590810 nova_compute[251104]: 2026-01-21 16:40:24.670 251108 DEBUG nova.compute.provider_tree [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:40:24 np0005590810 nova_compute[251104]: 2026-01-21 16:40:24.700 251108 DEBUG nova.scheduler.client.report [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:40:24 np0005590810 podman[269528]: 2026-01-21 16:40:24.807368961 +0000 UTC m=+0.045403308 container create 8bec435bd8d97cadae54b3a16d7e4035f5e2a0eccc13e61470eeea65c6d36547 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hertz, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 11:40:24 np0005590810 systemd[1]: Started libpod-conmon-8bec435bd8d97cadae54b3a16d7e4035f5e2a0eccc13e61470eeea65c6d36547.scope.
Jan 21 11:40:24 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:40:24 np0005590810 podman[269528]: 2026-01-21 16:40:24.785498343 +0000 UTC m=+0.023532710 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:40:24 np0005590810 podman[269528]: 2026-01-21 16:40:24.891428087 +0000 UTC m=+0.129462484 container init 8bec435bd8d97cadae54b3a16d7e4035f5e2a0eccc13e61470eeea65c6d36547 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hertz, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 21 11:40:24 np0005590810 podman[269528]: 2026-01-21 16:40:24.897647631 +0000 UTC m=+0.135681978 container start 8bec435bd8d97cadae54b3a16d7e4035f5e2a0eccc13e61470eeea65c6d36547 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hertz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:40:24 np0005590810 podman[269528]: 2026-01-21 16:40:24.900988554 +0000 UTC m=+0.139022901 container attach 8bec435bd8d97cadae54b3a16d7e4035f5e2a0eccc13e61470eeea65c6d36547 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hertz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 21 11:40:24 np0005590810 trusting_hertz[269545]: 167 167
Jan 21 11:40:24 np0005590810 systemd[1]: libpod-8bec435bd8d97cadae54b3a16d7e4035f5e2a0eccc13e61470eeea65c6d36547.scope: Deactivated successfully.
Jan 21 11:40:24 np0005590810 podman[269528]: 2026-01-21 16:40:24.90437281 +0000 UTC m=+0.142407157 container died 8bec435bd8d97cadae54b3a16d7e4035f5e2a0eccc13e61470eeea65c6d36547 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:40:24 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d6445f1cfd628f5f67efccbab1d24b9dae2895f9f0ec628c1d0648a6244dfe29-merged.mount: Deactivated successfully.
Jan 21 11:40:24 np0005590810 podman[269528]: 2026-01-21 16:40:24.946674201 +0000 UTC m=+0.184708538 container remove 8bec435bd8d97cadae54b3a16d7e4035f5e2a0eccc13e61470eeea65c6d36547 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hertz, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:40:24 np0005590810 systemd[1]: libpod-conmon-8bec435bd8d97cadae54b3a16d7e4035f5e2a0eccc13e61470eeea65c6d36547.scope: Deactivated successfully.
Jan 21 11:40:25 np0005590810 podman[269568]: 2026-01-21 16:40:25.112989138 +0000 UTC m=+0.047644728 container create 59a32869fbff362fc01baf69167e7c34913d70d95e034ec9000ba7c54d8e09c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 21 11:40:25 np0005590810 nova_compute[251104]: 2026-01-21 16:40:25.124 251108 DEBUG oslo_concurrency.lockutils [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:40:25 np0005590810 systemd[1]: Started libpod-conmon-59a32869fbff362fc01baf69167e7c34913d70d95e034ec9000ba7c54d8e09c2.scope.
Jan 21 11:40:25 np0005590810 podman[269568]: 2026-01-21 16:40:25.08822224 +0000 UTC m=+0.022877860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:40:25 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:40:25 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/118396234af621ca52ac2c4ee9904f66f34557ec611efef211899f8a31695dad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:25 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/118396234af621ca52ac2c4ee9904f66f34557ec611efef211899f8a31695dad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:25 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/118396234af621ca52ac2c4ee9904f66f34557ec611efef211899f8a31695dad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:25 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/118396234af621ca52ac2c4ee9904f66f34557ec611efef211899f8a31695dad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:40:25 np0005590810 podman[269568]: 2026-01-21 16:40:25.205717874 +0000 UTC m=+0.140373434 container init 59a32869fbff362fc01baf69167e7c34913d70d95e034ec9000ba7c54d8e09c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_agnesi, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:40:25 np0005590810 podman[269568]: 2026-01-21 16:40:25.214743724 +0000 UTC m=+0.149399274 container start 59a32869fbff362fc01baf69167e7c34913d70d95e034ec9000ba7c54d8e09c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 21 11:40:25 np0005590810 podman[269568]: 2026-01-21 16:40:25.218055967 +0000 UTC m=+0.152711547 container attach 59a32869fbff362fc01baf69167e7c34913d70d95e034ec9000ba7c54d8e09c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_agnesi, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:40:25 np0005590810 nova_compute[251104]: 2026-01-21 16:40:25.325 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:25 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:25 np0005590810 nova_compute[251104]: 2026-01-21 16:40:25.388 251108 INFO nova.scheduler.client.report [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Deleted allocations for instance b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c#033[00m
Jan 21 11:40:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:40:25] "GET /metrics HTTP/1.1" 200 48664 "" "Prometheus/2.51.0"
Jan 21 11:40:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:40:25] "GET /metrics HTTP/1.1" 200 48664 "" "Prometheus/2.51.0"
Jan 21 11:40:25 np0005590810 lvm[269659]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:40:25 np0005590810 lvm[269659]: VG ceph_vg0 finished
Jan 21 11:40:25 np0005590810 elastic_agnesi[269584]: {}
Jan 21 11:40:25 np0005590810 systemd[1]: libpod-59a32869fbff362fc01baf69167e7c34913d70d95e034ec9000ba7c54d8e09c2.scope: Deactivated successfully.
Jan 21 11:40:25 np0005590810 podman[269568]: 2026-01-21 16:40:25.970008804 +0000 UTC m=+0.904664354 container died 59a32869fbff362fc01baf69167e7c34913d70d95e034ec9000ba7c54d8e09c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 11:40:25 np0005590810 systemd[1]: libpod-59a32869fbff362fc01baf69167e7c34913d70d95e034ec9000ba7c54d8e09c2.scope: Consumed 1.212s CPU time.
Jan 21 11:40:26 np0005590810 systemd[1]: var-lib-containers-storage-overlay-118396234af621ca52ac2c4ee9904f66f34557ec611efef211899f8a31695dad-merged.mount: Deactivated successfully.
Jan 21 11:40:26 np0005590810 podman[269568]: 2026-01-21 16:40:26.021482661 +0000 UTC m=+0.956138211 container remove 59a32869fbff362fc01baf69167e7c34913d70d95e034ec9000ba7c54d8e09c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 21 11:40:26 np0005590810 systemd[1]: libpod-conmon-59a32869fbff362fc01baf69167e7c34913d70d95e034ec9000ba7c54d8e09c2.scope: Deactivated successfully.
Jan 21 11:40:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:40:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:40:26 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:40:26 np0005590810 nova_compute[251104]: 2026-01-21 16:40:26.289 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:26 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:26 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:26 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:40:26.391 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f6e8413f-2ba2-49cb-8bd6-36b8085ce01c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:40:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:40:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:26.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:40:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:26.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s
Jan 21 11:40:27 np0005590810 nova_compute[251104]: 2026-01-21 16:40:27.068 251108 DEBUG oslo_concurrency.lockutils [None req-eaad7091-ebda-4b7f-9b42-8fe00d3c3d46 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "b1bd1aca-19d5-4cbd-ab8e-10e71e91c66c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 101.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:40:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:27.196Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:40:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:27.197Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:40:27 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 30 slow ops, oldest one blocked for 88 sec, mon.compute-2 has slow ops)
Jan 21 11:40:27 np0005590810 ceph-mon[74380]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 21 11:40:27 np0005590810 ceph-mon[74380]: Health check cleared: SLOW_OPS (was: 30 slow ops, oldest one blocked for 88 sec, mon.compute-2 has slow ops)
Jan 21 11:40:27 np0005590810 ceph-mon[74380]: Cluster is now healthy
Jan 21 11:40:27 np0005590810 nova_compute[251104]: 2026-01-21 16:40:27.944 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:28.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:28.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:40:30 np0005590810 nova_compute[251104]: 2026-01-21 16:40:30.326 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:30.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:30.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:40:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:40:31 np0005590810 nova_compute[251104]: 2026-01-21 16:40:31.292 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:40:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:32.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:40:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:32.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:40:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:32.646Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:40:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:32.646Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:40:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:32.646Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:40:32 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:32.646Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:40:32 np0005590810 ceph-mgr[74671]: [dashboard INFO request] [192.168.122.100:60508] [POST] [200] [0.004s] [4.0B] [7ddaea63-7108-41ea-b1d4-9524050c947a] /api/prometheus_receiver
Jan 21 11:40:32 np0005590810 ceph-mgr[74671]: [dashboard INFO request] [192.168.122.100:60496] [POST] [200] [0.002s] [4.0B] [ffba38ba-bd41-4214-a0e9-5d9d48b579a6] /api/prometheus_receiver
Jan 21 11:40:33 np0005590810 podman[269707]: 2026-01-21 16:40:33.696431496 +0000 UTC m=+0.065622185 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 11:40:34 np0005590810 nova_compute[251104]: 2026-01-21 16:40:34.174 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:34 np0005590810 nova_compute[251104]: 2026-01-21 16:40:34.258 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:34.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:34.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:40:35 np0005590810 nova_compute[251104]: 2026-01-21 16:40:35.328 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:40:35] "GET /metrics HTTP/1.1" 200 48664 "" "Prometheus/2.51.0"
Jan 21 11:40:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:40:35] "GET /metrics HTTP/1.1" 200 48664 "" "Prometheus/2.51.0"
Jan 21 11:40:35 np0005590810 podman[269729]: 2026-01-21 16:40:35.712480463 +0000 UTC m=+0.093449250 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 21 11:40:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:40:36 np0005590810 nova_compute[251104]: 2026-01-21 16:40:36.294 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:36 np0005590810 nova_compute[251104]: 2026-01-21 16:40:36.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:36 np0005590810 nova_compute[251104]: 2026-01-21 16:40:36.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 21 11:40:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:36.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:36.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:40:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:37.198Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:40:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:37.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:40:37 np0005590810 nova_compute[251104]: 2026-01-21 16:40:37.380 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:38.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:38.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:40:39
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.nfs', 'vms', 'cephfs.cephfs.data', 'backups', '.mgr', '.rgw.root', 'default.rgw.log']
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:40:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 21 11:40:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:40:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:40:39 np0005590810 nova_compute[251104]: 2026-01-21 16:40:39.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:40:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:40:40 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:40:40 np0005590810 nova_compute[251104]: 2026-01-21 16:40:40.330 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:40 np0005590810 nova_compute[251104]: 2026-01-21 16:40:40.389 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:40.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:40.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:40:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:40:41 np0005590810 nova_compute[251104]: 2026-01-21 16:40:41.297 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:42.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:40:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:40:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:42.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:40:42 np0005590810 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 21 11:40:43 np0005590810 nova_compute[251104]: 2026-01-21 16:40:43.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:43 np0005590810 nova_compute[251104]: 2026-01-21 16:40:43.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 11:40:43 np0005590810 nova_compute[251104]: 2026-01-21 16:40:43.370 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 11:40:43 np0005590810 nova_compute[251104]: 2026-01-21 16:40:43.400 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 11:40:43 np0005590810 nova_compute[251104]: 2026-01-21 16:40:43.401 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:44 np0005590810 nova_compute[251104]: 2026-01-21 16:40:44.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:44.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:40:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:44.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:45 np0005590810 nova_compute[251104]: 2026-01-21 16:40:45.331 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:45 np0005590810 nova_compute[251104]: 2026-01-21 16:40:45.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:40:45] "GET /metrics HTTP/1.1" 200 48657 "" "Prometheus/2.51.0"
Jan 21 11:40:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:40:45] "GET /metrics HTTP/1.1" 200 48657 "" "Prometheus/2.51.0"
Jan 21 11:40:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:40:46 np0005590810 nova_compute[251104]: 2026-01-21 16:40:46.299 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:40:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:46.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:40:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:40:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:40:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:46.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:40:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:47.200Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:40:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:47.200Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:40:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:47.201Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:40:47 np0005590810 nova_compute[251104]: 2026-01-21 16:40:47.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:47 np0005590810 nova_compute[251104]: 2026-01-21 16:40:47.368 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 11:40:47 np0005590810 nova_compute[251104]: 2026-01-21 16:40:47.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:47 np0005590810 nova_compute[251104]: 2026-01-21 16:40:47.396 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:40:47 np0005590810 nova_compute[251104]: 2026-01-21 16:40:47.396 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:40:47 np0005590810 nova_compute[251104]: 2026-01-21 16:40:47.396 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:40:47 np0005590810 nova_compute[251104]: 2026-01-21 16:40:47.396 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:40:47 np0005590810 nova_compute[251104]: 2026-01-21 16:40:47.397 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:40:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:40:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751054042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:40:47 np0005590810 nova_compute[251104]: 2026-01-21 16:40:47.873 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:40:48 np0005590810 nova_compute[251104]: 2026-01-21 16:40:48.062 251108 WARNING nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:40:48 np0005590810 nova_compute[251104]: 2026-01-21 16:40:48.063 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4559MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 11:40:48 np0005590810 nova_compute[251104]: 2026-01-21 16:40:48.064 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:40:48 np0005590810 nova_compute[251104]: 2026-01-21 16:40:48.064 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:40:48 np0005590810 nova_compute[251104]: 2026-01-21 16:40:48.171 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 11:40:48 np0005590810 nova_compute[251104]: 2026-01-21 16:40:48.172 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 11:40:48 np0005590810 nova_compute[251104]: 2026-01-21 16:40:48.241 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:40:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:40:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:48.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:40:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:40:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:40:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:48.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:40:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:40:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2342394140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:40:48 np0005590810 nova_compute[251104]: 2026-01-21 16:40:48.745 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:40:48 np0005590810 nova_compute[251104]: 2026-01-21 16:40:48.750 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:40:48 np0005590810 nova_compute[251104]: 2026-01-21 16:40:48.770 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:40:48 np0005590810 nova_compute[251104]: 2026-01-21 16:40:48.806 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 11:40:48 np0005590810 nova_compute[251104]: 2026-01-21 16:40:48.807 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:40:50 np0005590810 nova_compute[251104]: 2026-01-21 16:40:50.332 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:50.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:40:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:40:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:50.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:40:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:40:51 np0005590810 nova_compute[251104]: 2026-01-21 16:40:51.302 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:51 np0005590810 nova_compute[251104]: 2026-01-21 16:40:51.807 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:40:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:52.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:40:52 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:40:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:52.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:53 np0005590810 nova_compute[251104]: 2026-01-21 16:40:53.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:40:53 np0005590810 nova_compute[251104]: 2026-01-21 16:40:53.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 21 11:40:53 np0005590810 nova_compute[251104]: 2026-01-21 16:40:53.398 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 21 11:40:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:40:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:40:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:54.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:54 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:40:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:54.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:55 np0005590810 nova_compute[251104]: 2026-01-21 16:40:55.335 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:40:55] "GET /metrics HTTP/1.1" 200 48661 "" "Prometheus/2.51.0"
Jan 21 11:40:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:40:55] "GET /metrics HTTP/1.1" 200 48661 "" "Prometheus/2.51.0"
Jan 21 11:40:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:40:56 np0005590810 nova_compute[251104]: 2026-01-21 16:40:56.304 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:40:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:56.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:56 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:40:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:56.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:40:57.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:40:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:40:58.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:40:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:40:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:40:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:40:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:40:58.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:00 np0005590810 nova_compute[251104]: 2026-01-21 16:41:00.337 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:00.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:41:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:00.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:41:01 np0005590810 nova_compute[251104]: 2026-01-21 16:41:01.307 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:41:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:02.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:41:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:41:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:02.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:04.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:41:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:41:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:04.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:41:04 np0005590810 podman[269856]: 2026-01-21 16:41:04.707964878 +0000 UTC m=+0.075048988 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 21 11:41:05 np0005590810 nova_compute[251104]: 2026-01-21 16:41:05.339 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:41:05] "GET /metrics HTTP/1.1" 200 48661 "" "Prometheus/2.51.0"
Jan 21 11:41:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:41:05] "GET /metrics HTTP/1.1" 200 48661 "" "Prometheus/2.51.0"
Jan 21 11:41:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:41:06 np0005590810 nova_compute[251104]: 2026-01-21 16:41:06.310 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:41:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:06.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:41:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:41:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:06.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:06 np0005590810 podman[269899]: 2026-01-21 16:41:06.702844908 +0000 UTC m=+0.080226199 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, 
org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 21 11:41:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:41:07.203Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:41:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:08.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:41:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:08.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:41:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:41:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:41:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:41:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:41:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:41:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:41:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:41:10 np0005590810 nova_compute[251104]: 2026-01-21 16:41:10.341 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:10 np0005590810 ovn_controller[152632]: 2026-01-21T16:41:10Z|00076|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Jan 21 11:41:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:10.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Jan 21 11:41:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:10.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:41:11 np0005590810 nova_compute[251104]: 2026-01-21 16:41:11.313 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:12.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1018: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 21 11:41:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:12.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 21 11:41:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:14.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:14.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:15 np0005590810 nova_compute[251104]: 2026-01-21 16:41:15.343 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:41:15] "GET /metrics HTTP/1.1" 200 48675 "" "Prometheus/2.51.0"
Jan 21 11:41:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:41:15] "GET /metrics HTTP/1.1" 200 48675 "" "Prometheus/2.51.0"
Jan 21 11:41:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:41:16 np0005590810 nova_compute[251104]: 2026-01-21 16:41:16.315 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 21 11:41:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:16.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:16.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:41:17.204Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:41:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 21 11:41:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:18.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:18.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:20 np0005590810 nova_compute[251104]: 2026-01-21 16:41:20.346 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 21 11:41:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:20.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:20.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:41:21 np0005590810 nova_compute[251104]: 2026-01-21 16:41:21.318 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:41:22.031 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:41:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:41:22.032 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:41:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:41:22.032 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:41:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 21 11:41:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:41:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:22.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:41:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:22.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:41:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:41:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 21 11:41:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:24.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:24.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:25 np0005590810 nova_compute[251104]: 2026-01-21 16:41:25.347 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:41:25] "GET /metrics HTTP/1.1" 200 48676 "" "Prometheus/2.51.0"
Jan 21 11:41:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:41:25] "GET /metrics HTTP/1.1" 200 48676 "" "Prometheus/2.51.0"
Jan 21 11:41:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:41:26 np0005590810 nova_compute[251104]: 2026-01-21 16:41:26.321 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 21 11:41:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:26.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:41:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:26.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:41:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:41:27.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:41:27 np0005590810 podman[270097]: 2026-01-21 16:41:27.295271515 +0000 UTC m=+0.098837216 container exec 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 21 11:41:27 np0005590810 podman[270097]: 2026-01-21 16:41:27.393712598 +0000 UTC m=+0.197278269 container exec_died 2bb730cd0dc058122d2a114f184c646349db2c02b9a9288126eea99cf3c65ea8 (image=quay.io/ceph/ceph:v19, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 21 11:41:27 np0005590810 podman[270217]: 2026-01-21 16:41:27.941638739 +0000 UTC m=+0.115470082 container exec 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:41:27 np0005590810 podman[270217]: 2026-01-21 16:41:27.949077319 +0000 UTC m=+0.122908672 container exec_died 7182fb1befc2fb25346a8e5840c132e734e878fc54793d00f5676f9815daf440 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:41:28 np0005590810 podman[270351]: 2026-01-21 16:41:28.5357199 +0000 UTC m=+0.087945548 container exec 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:41:28 np0005590810 podman[270370]: 2026-01-21 16:41:28.621479749 +0000 UTC m=+0.065672987 container exec_died 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:41:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1026: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 21 11:41:28 np0005590810 podman[270351]: 2026-01-21 16:41:28.627079553 +0000 UTC m=+0.179305171 container exec_died 62f4c606ff9892782178902cec6656fd383dd0bf06478ef2fff148f7288118e0 (image=quay.io/ceph/haproxy:2.3, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-haproxy-nfs-cephfs-compute-0-fgcddz)
Jan 21 11:41:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:28.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:41:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:28.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:41:28 np0005590810 podman[270414]: 2026-01-21 16:41:28.846039283 +0000 UTC m=+0.053363915 container exec e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.28.2, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 21 11:41:28 np0005590810 podman[270414]: 2026-01-21 16:41:28.881440701 +0000 UTC m=+0.088765303 container exec_died e460bbd40c4128979db4961a6a2fe3680f9475dfdc61c9debebe2ebbe4d9568a (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-keepalived-nfs-cephfs-compute-0-mqubfc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.buildah.version=1.28.2, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-type=git, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc.)
Jan 21 11:41:29 np0005590810 podman[270479]: 2026-01-21 16:41:29.085901941 +0000 UTC m=+0.059146795 container exec 50c8655205428d9eb4ff0638b184dbb97bde97ceb1b8d6fa1486afcf9c09cef3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:41:29 np0005590810 podman[270479]: 2026-01-21 16:41:29.112647341 +0000 UTC m=+0.085892195 container exec_died 50c8655205428d9eb4ff0638b184dbb97bde97ceb1b8d6fa1486afcf9c09cef3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:41:29 np0005590810 podman[270554]: 2026-01-21 16:41:29.329428353 +0000 UTC m=+0.054671136 container exec 915b915b353636f6072df56045c72e24aa0b97f86378396f7575eacf515dce1e (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:41:29 np0005590810 podman[270554]: 2026-01-21 16:41:29.499706253 +0000 UTC m=+0.224949006 container exec_died 915b915b353636f6072df56045c72e24aa0b97f86378396f7575eacf515dce1e (image=quay.io/ceph/grafana:10.4.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 21 11:41:29 np0005590810 podman[270664]: 2026-01-21 16:41:29.911543373 +0000 UTC m=+0.060624450 container exec 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:41:29 np0005590810 podman[270664]: 2026-01-21 16:41:29.948208331 +0000 UTC m=+0.097289368 container exec_died 57833e13bf333028c88e7729b3fd4fb8acb2b6e25856e70a9fd0fb219dd5bef4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d9745984-fea8-5195-8ec5-61f685b5c785-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 21 11:41:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:41:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:30 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:41:30 np0005590810 nova_compute[251104]: 2026-01-21 16:41:30.349 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:30 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 21 11:41:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:30.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:41:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:30.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:41:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 377 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:41:31 np0005590810 nova_compute[251104]: 2026-01-21 16:41:31.324 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:41:31 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:31 np0005590810 podman[270882]: 2026-01-21 16:41:31.939568372 +0000 UTC m=+0.040475536 container create 422d3406c5ef6349bee513a956588d230995b09f59fb86008a6b6b36ab52c84e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:41:31 np0005590810 systemd[1]: Started libpod-conmon-422d3406c5ef6349bee513a956588d230995b09f59fb86008a6b6b36ab52c84e.scope.
Jan 21 11:41:32 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:41:32 np0005590810 podman[270882]: 2026-01-21 16:41:31.923150593 +0000 UTC m=+0.024057787 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:41:32 np0005590810 podman[270882]: 2026-01-21 16:41:32.02655574 +0000 UTC m=+0.127462954 container init 422d3406c5ef6349bee513a956588d230995b09f59fb86008a6b6b36ab52c84e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_carver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 21 11:41:32 np0005590810 podman[270882]: 2026-01-21 16:41:32.033295928 +0000 UTC m=+0.134203092 container start 422d3406c5ef6349bee513a956588d230995b09f59fb86008a6b6b36ab52c84e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_carver, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:41:32 np0005590810 podman[270882]: 2026-01-21 16:41:32.037692665 +0000 UTC m=+0.138599859 container attach 422d3406c5ef6349bee513a956588d230995b09f59fb86008a6b6b36ab52c84e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:41:32 np0005590810 elastic_carver[270898]: 167 167
Jan 21 11:41:32 np0005590810 podman[270882]: 2026-01-21 16:41:32.038908742 +0000 UTC m=+0.139815906 container died 422d3406c5ef6349bee513a956588d230995b09f59fb86008a6b6b36ab52c84e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_carver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 21 11:41:32 np0005590810 systemd[1]: libpod-422d3406c5ef6349bee513a956588d230995b09f59fb86008a6b6b36ab52c84e.scope: Deactivated successfully.
Jan 21 11:41:32 np0005590810 systemd[1]: var-lib-containers-storage-overlay-7c79b60845029ea27cc7fd478fd7543294214eee6b434db4831024f441e61f5a-merged.mount: Deactivated successfully.
Jan 21 11:41:32 np0005590810 podman[270882]: 2026-01-21 16:41:32.079528632 +0000 UTC m=+0.180435796 container remove 422d3406c5ef6349bee513a956588d230995b09f59fb86008a6b6b36ab52c84e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_carver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:41:32 np0005590810 systemd[1]: libpod-conmon-422d3406c5ef6349bee513a956588d230995b09f59fb86008a6b6b36ab52c84e.scope: Deactivated successfully.
Jan 21 11:41:32 np0005590810 podman[270922]: 2026-01-21 16:41:32.247907544 +0000 UTC m=+0.042941313 container create 9c67346797de9b109b95fdeee11cb523584e6478e360f5d34dd8b5e4a4527987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:41:32 np0005590810 systemd[1]: Started libpod-conmon-9c67346797de9b109b95fdeee11cb523584e6478e360f5d34dd8b5e4a4527987.scope.
Jan 21 11:41:32 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:41:32 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5296ce76c533c73973c7c7ce0cb725a772c44bf7ab5bd7b5ae7df6cafba22a93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:32 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5296ce76c533c73973c7c7ce0cb725a772c44bf7ab5bd7b5ae7df6cafba22a93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:32 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5296ce76c533c73973c7c7ce0cb725a772c44bf7ab5bd7b5ae7df6cafba22a93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:32 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5296ce76c533c73973c7c7ce0cb725a772c44bf7ab5bd7b5ae7df6cafba22a93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:32 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5296ce76c533c73973c7c7ce0cb725a772c44bf7ab5bd7b5ae7df6cafba22a93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:32 np0005590810 podman[270922]: 2026-01-21 16:41:32.228954296 +0000 UTC m=+0.023988085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:41:32 np0005590810 podman[270922]: 2026-01-21 16:41:32.332545689 +0000 UTC m=+0.127579488 container init 9c67346797de9b109b95fdeee11cb523584e6478e360f5d34dd8b5e4a4527987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:41:32 np0005590810 podman[270922]: 2026-01-21 16:41:32.339180594 +0000 UTC m=+0.134214363 container start 9c67346797de9b109b95fdeee11cb523584e6478e360f5d34dd8b5e4a4527987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 11:41:32 np0005590810 podman[270922]: 2026-01-21 16:41:32.344188859 +0000 UTC m=+0.139222628 container attach 9c67346797de9b109b95fdeee11cb523584e6478e360f5d34dd8b5e4a4527987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 11:41:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:41:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:32.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:41:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:41:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:32.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:41:32 np0005590810 great_bassi[270938]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:41:32 np0005590810 great_bassi[270938]: --> All data devices are unavailable
Jan 21 11:41:32 np0005590810 systemd[1]: libpod-9c67346797de9b109b95fdeee11cb523584e6478e360f5d34dd8b5e4a4527987.scope: Deactivated successfully.
Jan 21 11:41:32 np0005590810 podman[270922]: 2026-01-21 16:41:32.693283825 +0000 UTC m=+0.488317614 container died 9c67346797de9b109b95fdeee11cb523584e6478e360f5d34dd8b5e4a4527987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 11:41:32 np0005590810 systemd[1]: var-lib-containers-storage-overlay-5296ce76c533c73973c7c7ce0cb725a772c44bf7ab5bd7b5ae7df6cafba22a93-merged.mount: Deactivated successfully.
Jan 21 11:41:32 np0005590810 podman[270922]: 2026-01-21 16:41:32.738450085 +0000 UTC m=+0.533483854 container remove 9c67346797de9b109b95fdeee11cb523584e6478e360f5d34dd8b5e4a4527987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bassi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 21 11:41:32 np0005590810 systemd[1]: libpod-conmon-9c67346797de9b109b95fdeee11cb523584e6478e360f5d34dd8b5e4a4527987.scope: Deactivated successfully.
Jan 21 11:41:32 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:32 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:41:32 np0005590810 ceph-mon[74380]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 21 11:41:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 377 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Jan 21 11:41:33 np0005590810 podman[271059]: 2026-01-21 16:41:33.346216341 +0000 UTC m=+0.039783984 container create 8cab571030a3dc4b231cfad9377f4e5165bdeade93ceb1e5a3150cdffebacff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:41:33 np0005590810 systemd[1]: Started libpod-conmon-8cab571030a3dc4b231cfad9377f4e5165bdeade93ceb1e5a3150cdffebacff0.scope.
Jan 21 11:41:33 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:41:33 np0005590810 podman[271059]: 2026-01-21 16:41:33.418671679 +0000 UTC m=+0.112239342 container init 8cab571030a3dc4b231cfad9377f4e5165bdeade93ceb1e5a3150cdffebacff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 11:41:33 np0005590810 podman[271059]: 2026-01-21 16:41:33.329288786 +0000 UTC m=+0.022856459 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:41:33 np0005590810 podman[271059]: 2026-01-21 16:41:33.429495594 +0000 UTC m=+0.123063237 container start 8cab571030a3dc4b231cfad9377f4e5165bdeade93ceb1e5a3150cdffebacff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Jan 21 11:41:33 np0005590810 cranky_hermann[271075]: 167 167
Jan 21 11:41:33 np0005590810 systemd[1]: libpod-8cab571030a3dc4b231cfad9377f4e5165bdeade93ceb1e5a3150cdffebacff0.scope: Deactivated successfully.
Jan 21 11:41:33 np0005590810 podman[271059]: 2026-01-21 16:41:33.438560225 +0000 UTC m=+0.132127898 container attach 8cab571030a3dc4b231cfad9377f4e5165bdeade93ceb1e5a3150cdffebacff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 11:41:33 np0005590810 podman[271059]: 2026-01-21 16:41:33.439109812 +0000 UTC m=+0.132677455 container died 8cab571030a3dc4b231cfad9377f4e5165bdeade93ceb1e5a3150cdffebacff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 21 11:41:33 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c0eb13ac42ef4e682f5263e9e76ae22cba0f3bb6b07b0143cc3477e3b7d07556-merged.mount: Deactivated successfully.
Jan 21 11:41:33 np0005590810 podman[271059]: 2026-01-21 16:41:33.488164083 +0000 UTC m=+0.181731726 container remove 8cab571030a3dc4b231cfad9377f4e5165bdeade93ceb1e5a3150cdffebacff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 21 11:41:33 np0005590810 systemd[1]: libpod-conmon-8cab571030a3dc4b231cfad9377f4e5165bdeade93ceb1e5a3150cdffebacff0.scope: Deactivated successfully.
Jan 21 11:41:33 np0005590810 podman[271098]: 2026-01-21 16:41:33.710979523 +0000 UTC m=+0.096995759 container create f75c004cc905c3a38d9ddf0290eeeff7857085fe83a40ac36394373897206d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Jan 21 11:41:33 np0005590810 podman[271098]: 2026-01-21 16:41:33.640077503 +0000 UTC m=+0.026093759 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:41:33 np0005590810 systemd[1]: Started libpod-conmon-f75c004cc905c3a38d9ddf0290eeeff7857085fe83a40ac36394373897206d96.scope.
Jan 21 11:41:33 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:41:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f690d201ddf40fe27130bdb3bb0a7fcd7a3d54ca209330cc39a2fb6c53a057ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f690d201ddf40fe27130bdb3bb0a7fcd7a3d54ca209330cc39a2fb6c53a057ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f690d201ddf40fe27130bdb3bb0a7fcd7a3d54ca209330cc39a2fb6c53a057ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:33 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f690d201ddf40fe27130bdb3bb0a7fcd7a3d54ca209330cc39a2fb6c53a057ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:34 np0005590810 podman[271098]: 2026-01-21 16:41:34.05499559 +0000 UTC m=+0.441011916 container init f75c004cc905c3a38d9ddf0290eeeff7857085fe83a40ac36394373897206d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 21 11:41:34 np0005590810 podman[271098]: 2026-01-21 16:41:34.063859205 +0000 UTC m=+0.449875441 container start f75c004cc905c3a38d9ddf0290eeeff7857085fe83a40ac36394373897206d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 21 11:41:34 np0005590810 podman[271098]: 2026-01-21 16:41:34.068072366 +0000 UTC m=+0.454088632 container attach f75c004cc905c3a38d9ddf0290eeeff7857085fe83a40ac36394373897206d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]: {
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:    "0": [
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:        {
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            "devices": [
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "/dev/loop3"
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            ],
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            "lv_name": "ceph_lv0",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            "lv_size": "21470642176",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            "name": "ceph_lv0",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            "tags": {
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.cluster_name": "ceph",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.crush_device_class": "",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.encrypted": "0",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.osd_id": "0",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.type": "block",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.vdo": "0",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:                "ceph.with_tpm": "0"
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            },
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            "type": "block",
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:            "vg_name": "ceph_vg0"
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:        }
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]:    ]
Jan 21 11:41:34 np0005590810 keen_agnesi[271113]: }
Jan 21 11:41:34 np0005590810 systemd[1]: libpod-f75c004cc905c3a38d9ddf0290eeeff7857085fe83a40ac36394373897206d96.scope: Deactivated successfully.
Jan 21 11:41:34 np0005590810 podman[271098]: 2026-01-21 16:41:34.428614846 +0000 UTC m=+0.814631072 container died f75c004cc905c3a38d9ddf0290eeeff7857085fe83a40ac36394373897206d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:41:34 np0005590810 systemd[1]: var-lib-containers-storage-overlay-f690d201ddf40fe27130bdb3bb0a7fcd7a3d54ca209330cc39a2fb6c53a057ab-merged.mount: Deactivated successfully.
Jan 21 11:41:34 np0005590810 podman[271098]: 2026-01-21 16:41:34.47714082 +0000 UTC m=+0.863157056 container remove f75c004cc905c3a38d9ddf0290eeeff7857085fe83a40ac36394373897206d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_agnesi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 21 11:41:34 np0005590810 systemd[1]: libpod-conmon-f75c004cc905c3a38d9ddf0290eeeff7857085fe83a40ac36394373897206d96.scope: Deactivated successfully.
Jan 21 11:41:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:34.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:34.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:35 np0005590810 podman[271224]: 2026-01-21 16:41:35.064420522 +0000 UTC m=+0.043645495 container create 7759efb4139393a7181296189c4507663882ba912e34fa5d856b502267abbcf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_pike, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 21 11:41:35 np0005590810 systemd[1]: Started libpod-conmon-7759efb4139393a7181296189c4507663882ba912e34fa5d856b502267abbcf0.scope.
Jan 21 11:41:35 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:41:35 np0005590810 podman[271224]: 2026-01-21 16:41:35.130886773 +0000 UTC m=+0.110111766 container init 7759efb4139393a7181296189c4507663882ba912e34fa5d856b502267abbcf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:41:35 np0005590810 podman[271224]: 2026-01-21 16:41:35.138420276 +0000 UTC m=+0.117645259 container start 7759efb4139393a7181296189c4507663882ba912e34fa5d856b502267abbcf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_pike, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:41:35 np0005590810 podman[271224]: 2026-01-21 16:41:35.047009832 +0000 UTC m=+0.026234835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:41:35 np0005590810 podman[271224]: 2026-01-21 16:41:35.142831733 +0000 UTC m=+0.122056766 container attach 7759efb4139393a7181296189c4507663882ba912e34fa5d856b502267abbcf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:41:35 np0005590810 awesome_pike[271241]: 167 167
Jan 21 11:41:35 np0005590810 systemd[1]: libpod-7759efb4139393a7181296189c4507663882ba912e34fa5d856b502267abbcf0.scope: Deactivated successfully.
Jan 21 11:41:35 np0005590810 podman[271224]: 2026-01-21 16:41:35.145126074 +0000 UTC m=+0.124351047 container died 7759efb4139393a7181296189c4507663882ba912e34fa5d856b502267abbcf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_pike, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:41:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1030: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 377 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Jan 21 11:41:35 np0005590810 systemd[1]: var-lib-containers-storage-overlay-5fcbd7484545cf42b96ccd466dea18e2b72f74e57d2f4439f7abc6c984c33cc3-merged.mount: Deactivated successfully.
Jan 21 11:41:35 np0005590810 podman[271238]: 2026-01-21 16:41:35.18143887 +0000 UTC m=+0.076434301 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 21 11:41:35 np0005590810 podman[271224]: 2026-01-21 16:41:35.193292788 +0000 UTC m=+0.172517761 container remove 7759efb4139393a7181296189c4507663882ba912e34fa5d856b502267abbcf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_pike, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 21 11:41:35 np0005590810 systemd[1]: libpod-conmon-7759efb4139393a7181296189c4507663882ba912e34fa5d856b502267abbcf0.scope: Deactivated successfully.
Jan 21 11:41:35 np0005590810 nova_compute[251104]: 2026-01-21 16:41:35.352 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:35 np0005590810 podman[271283]: 2026-01-21 16:41:35.372126293 +0000 UTC m=+0.048493535 container create dd7a7345edc8edb9781277252a56768d864e05daeed2a2c5e23f1f5cd74e72eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_kapitsa, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:41:35 np0005590810 systemd[1]: Started libpod-conmon-dd7a7345edc8edb9781277252a56768d864e05daeed2a2c5e23f1f5cd74e72eb.scope.
Jan 21 11:41:35 np0005590810 podman[271283]: 2026-01-21 16:41:35.353601068 +0000 UTC m=+0.029968340 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:41:35 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:41:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9aee086b56f602482a13a034fc1ae6f5ab0e51602f8bfe97e778c6d7357437d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9aee086b56f602482a13a034fc1ae6f5ab0e51602f8bfe97e778c6d7357437d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9aee086b56f602482a13a034fc1ae6f5ab0e51602f8bfe97e778c6d7357437d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:35 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9aee086b56f602482a13a034fc1ae6f5ab0e51602f8bfe97e778c6d7357437d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:41:35 np0005590810 podman[271283]: 2026-01-21 16:41:35.464397585 +0000 UTC m=+0.140764857 container init dd7a7345edc8edb9781277252a56768d864e05daeed2a2c5e23f1f5cd74e72eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 11:41:35 np0005590810 podman[271283]: 2026-01-21 16:41:35.473120675 +0000 UTC m=+0.149487917 container start dd7a7345edc8edb9781277252a56768d864e05daeed2a2c5e23f1f5cd74e72eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 11:41:35 np0005590810 podman[271283]: 2026-01-21 16:41:35.478789791 +0000 UTC m=+0.155157033 container attach dd7a7345edc8edb9781277252a56768d864e05daeed2a2c5e23f1f5cd74e72eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_kapitsa, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:41:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:41:35] "GET /metrics HTTP/1.1" 200 48676 "" "Prometheus/2.51.0"
Jan 21 11:41:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:41:35] "GET /metrics HTTP/1.1" 200 48676 "" "Prometheus/2.51.0"
Jan 21 11:41:35 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:41:35.943 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:19:7b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:3b:98:31:96:2a'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:41:35 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:41:35.943 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 11:41:35 np0005590810 nova_compute[251104]: 2026-01-21 16:41:35.944 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:36 np0005590810 lvm[271374]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:41:36 np0005590810 lvm[271374]: VG ceph_vg0 finished
Jan 21 11:41:36 np0005590810 musing_kapitsa[271297]: {}
Jan 21 11:41:36 np0005590810 systemd[1]: libpod-dd7a7345edc8edb9781277252a56768d864e05daeed2a2c5e23f1f5cd74e72eb.scope: Deactivated successfully.
Jan 21 11:41:36 np0005590810 systemd[1]: libpod-dd7a7345edc8edb9781277252a56768d864e05daeed2a2c5e23f1f5cd74e72eb.scope: Consumed 1.215s CPU time.
Jan 21 11:41:36 np0005590810 podman[271283]: 2026-01-21 16:41:36.255058982 +0000 UTC m=+0.931426234 container died dd7a7345edc8edb9781277252a56768d864e05daeed2a2c5e23f1f5cd74e72eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_kapitsa, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:41:36 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c9aee086b56f602482a13a034fc1ae6f5ab0e51602f8bfe97e778c6d7357437d-merged.mount: Deactivated successfully.
Jan 21 11:41:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:41:36 np0005590810 podman[271283]: 2026-01-21 16:41:36.309015515 +0000 UTC m=+0.985382757 container remove dd7a7345edc8edb9781277252a56768d864e05daeed2a2c5e23f1f5cd74e72eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:41:36 np0005590810 systemd[1]: libpod-conmon-dd7a7345edc8edb9781277252a56768d864e05daeed2a2c5e23f1f5cd74e72eb.scope: Deactivated successfully.
Jan 21 11:41:36 np0005590810 nova_compute[251104]: 2026-01-21 16:41:36.326 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:41:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:41:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:36.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:36.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 16 KiB/s wr, 2 op/s
Jan 21 11:41:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:41:37.206Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:41:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:41:37.207Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:41:37 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:37 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:37 np0005590810 podman[271417]: 2026-01-21 16:41:37.73806838 +0000 UTC m=+0.106175124 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 21 11:41:38 np0005590810 nova_compute[251104]: 2026-01-21 16:41:38.392 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:41:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:38.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:41:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:38.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:41:38 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:41:38.946 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f6e8413f-2ba2-49cb-8bd6-36b8085ce01c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 16 KiB/s wr, 2 op/s
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:41:39
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'vms', 'backups', 'volumes', '.mgr', 'default.rgw.log', '.nfs', 'default.rgw.control', '.rgw.root']
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:41:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:41:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007592094606694016 of space, bias 1.0, pg target 0.22776283820082047 quantized to 32 (current 32)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:41:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:41:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:41:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:41:40 np0005590810 nova_compute[251104]: 2026-01-21 16:41:40.354 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:40.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:41:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:40.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:41:41 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:41:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 KiB/s wr, 32 op/s
Jan 21 11:41:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:41:41 np0005590810 nova_compute[251104]: 2026-01-21 16:41:41.329 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:42 np0005590810 nova_compute[251104]: 2026-01-21 16:41:42.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:41:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:42.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:41:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:42.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:41:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 21 11:41:43 np0005590810 nova_compute[251104]: 2026-01-21 16:41:43.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:41:43 np0005590810 nova_compute[251104]: 2026-01-21 16:41:43.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 11:41:43 np0005590810 nova_compute[251104]: 2026-01-21 16:41:43.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 11:41:43 np0005590810 nova_compute[251104]: 2026-01-21 16:41:43.403 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 11:41:43 np0005590810 nova_compute[251104]: 2026-01-21 16:41:43.404 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:41:44 np0005590810 nova_compute[251104]: 2026-01-21 16:41:44.367 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:41:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:41:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:44.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:41:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:44.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 21 11:41:45 np0005590810 nova_compute[251104]: 2026-01-21 16:41:45.357 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:45 np0005590810 nova_compute[251104]: 2026-01-21 16:41:45.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:41:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:41:45] "GET /metrics HTTP/1.1" 200 48678 "" "Prometheus/2.51.0"
Jan 21 11:41:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:41:45] "GET /metrics HTTP/1.1" 200 48678 "" "Prometheus/2.51.0"
Jan 21 11:41:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:41:46 np0005590810 nova_compute[251104]: 2026-01-21 16:41:46.331 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:46 np0005590810 nova_compute[251104]: 2026-01-21 16:41:46.363 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:41:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:46.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:46.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 21 11:41:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:41:47.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:41:48 np0005590810 nova_compute[251104]: 2026-01-21 16:41:48.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:41:48 np0005590810 nova_compute[251104]: 2026-01-21 16:41:48.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 11:41:48 np0005590810 nova_compute[251104]: 2026-01-21 16:41:48.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:41:48 np0005590810 nova_compute[251104]: 2026-01-21 16:41:48.393 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:41:48 np0005590810 nova_compute[251104]: 2026-01-21 16:41:48.394 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:41:48 np0005590810 nova_compute[251104]: 2026-01-21 16:41:48.394 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:41:48 np0005590810 nova_compute[251104]: 2026-01-21 16:41:48.394 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:41:48 np0005590810 nova_compute[251104]: 2026-01-21 16:41:48.394 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:41:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.002000063s ======
Jan 21 11:41:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:48.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 21 11:41:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:48.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:41:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/900516595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:41:48 np0005590810 nova_compute[251104]: 2026-01-21 16:41:48.908 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:41:49 np0005590810 nova_compute[251104]: 2026-01-21 16:41:49.060 251108 WARNING nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:41:49 np0005590810 nova_compute[251104]: 2026-01-21 16:41:49.061 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4520MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 11:41:49 np0005590810 nova_compute[251104]: 2026-01-21 16:41:49.062 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:41:49 np0005590810 nova_compute[251104]: 2026-01-21 16:41:49.062 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:41:49 np0005590810 nova_compute[251104]: 2026-01-21 16:41:49.130 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 11:41:49 np0005590810 nova_compute[251104]: 2026-01-21 16:41:49.130 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 11:41:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 21 11:41:49 np0005590810 nova_compute[251104]: 2026-01-21 16:41:49.180 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:41:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:41:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789793848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:41:49 np0005590810 nova_compute[251104]: 2026-01-21 16:41:49.696 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:41:49 np0005590810 nova_compute[251104]: 2026-01-21 16:41:49.702 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:41:49 np0005590810 nova_compute[251104]: 2026-01-21 16:41:49.737 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:41:49 np0005590810 nova_compute[251104]: 2026-01-21 16:41:49.740 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 11:41:49 np0005590810 nova_compute[251104]: 2026-01-21 16:41:49.740 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:41:50 np0005590810 nova_compute[251104]: 2026-01-21 16:41:50.358 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:50.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:41:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:50.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:41:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 21 11:41:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:41:51 np0005590810 nova_compute[251104]: 2026-01-21 16:41:51.334 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:51 np0005590810 nova_compute[251104]: 2026-01-21 16:41:51.741 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:41:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:52.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19171c25d0 =====
Jan 21 11:41:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19171c25d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:52 np0005590810 radosgw[94128]: beast: 0x7f19171c25d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:52.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:41:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:41:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:41:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:54.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:54.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:41:55 np0005590810 nova_compute[251104]: 2026-01-21 16:41:55.360 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:41:55] "GET /metrics HTTP/1.1" 200 48657 "" "Prometheus/2.51.0"
Jan 21 11:41:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:41:55] "GET /metrics HTTP/1.1" 200 48657 "" "Prometheus/2.51.0"
Jan 21 11:41:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:41:56 np0005590810 nova_compute[251104]: 2026-01-21 16:41:56.336 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:41:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:56.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:56.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:41:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:41:57.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:41:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:41:58.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:41:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:41:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:41:58.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:41:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:42:00 np0005590810 nova_compute[251104]: 2026-01-21 16:42:00.363 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:42:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:00.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:42:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:00.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:42:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:42:01 np0005590810 nova_compute[251104]: 2026-01-21 16:42:01.338 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:02 np0005590810 nova_compute[251104]: 2026-01-21 16:42:02.577 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:42:02 np0005590810 nova_compute[251104]: 2026-01-21 16:42:02.577 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:42:02 np0005590810 nova_compute[251104]: 2026-01-21 16:42:02.597 251108 DEBUG nova.compute.manager [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 21 11:42:02 np0005590810 nova_compute[251104]: 2026-01-21 16:42:02.678 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:42:02 np0005590810 nova_compute[251104]: 2026-01-21 16:42:02.678 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:42:02 np0005590810 nova_compute[251104]: 2026-01-21 16:42:02.686 251108 DEBUG nova.virt.hardware [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 21 11:42:02 np0005590810 nova_compute[251104]: 2026-01-21 16:42:02.686 251108 INFO nova.compute.claims [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 21 11:42:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:02.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:42:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:02.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:42:02 np0005590810 nova_compute[251104]: 2026-01-21 16:42:02.811 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:42:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:42:03 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:42:03 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/845452930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.288 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.295 251108 DEBUG nova.compute.provider_tree [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.412 251108 DEBUG nova.scheduler.client.report [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.440 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.441 251108 DEBUG nova.compute.manager [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.524 251108 DEBUG nova.compute.manager [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.525 251108 DEBUG nova.network.neutron [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.550 251108 INFO nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.576 251108 DEBUG nova.compute.manager [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.751 251108 DEBUG nova.compute.manager [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.753 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.754 251108 INFO nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Creating image(s)#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.789 251108 DEBUG nova.storage.rbd_utils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.827 251108 DEBUG nova.storage.rbd_utils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.860 251108 DEBUG nova.storage.rbd_utils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.865 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2feac22a67fc835e7393e231263ebe1fb23c2b92 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.931 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2feac22a67fc835e7393e231263ebe1fb23c2b92 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.932 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "2feac22a67fc835e7393e231263ebe1fb23c2b92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.933 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "2feac22a67fc835e7393e231263ebe1fb23c2b92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.933 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "2feac22a67fc835e7393e231263ebe1fb23c2b92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.963 251108 DEBUG nova.storage.rbd_utils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.967 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2feac22a67fc835e7393e231263ebe1fb23c2b92 7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:42:03 np0005590810 nova_compute[251104]: 2026-01-21 16:42:03.991 251108 DEBUG nova.policy [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '918cf3fb78394ce8b3ade91a1ad699fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d6214185b004f9c9798abfc29d1ae14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 21 11:42:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:04.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:04.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:42:05 np0005590810 nova_compute[251104]: 2026-01-21 16:42:05.271 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2feac22a67fc835e7393e231263ebe1fb23c2b92 7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:42:05 np0005590810 nova_compute[251104]: 2026-01-21 16:42:05.343 251108 DEBUG nova.storage.rbd_utils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] resizing rbd image 7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 21 11:42:05 np0005590810 nova_compute[251104]: 2026-01-21 16:42:05.491 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:42:05] "GET /metrics HTTP/1.1" 200 48657 "" "Prometheus/2.51.0"
Jan 21 11:42:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:42:05] "GET /metrics HTTP/1.1" 200 48657 "" "Prometheus/2.51.0"
Jan 21 11:42:05 np0005590810 podman[271735]: 2026-01-21 16:42:05.678648179 +0000 UTC m=+0.056638087 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 11:42:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:42:06 np0005590810 nova_compute[251104]: 2026-01-21 16:42:06.325 251108 DEBUG nova.objects.instance [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lazy-loading 'migration_context' on Instance uuid 7e84b1a2-5047-4d10-a2f2-f18fb832420f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 21 11:42:06 np0005590810 nova_compute[251104]: 2026-01-21 16:42:06.340 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 21 11:42:06 np0005590810 nova_compute[251104]: 2026-01-21 16:42:06.340 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Ensure instance console log exists: /var/lib/nova/instances/7e84b1a2-5047-4d10-a2f2-f18fb832420f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 21 11:42:06 np0005590810 nova_compute[251104]: 2026-01-21 16:42:06.341 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:42:06 np0005590810 nova_compute[251104]: 2026-01-21 16:42:06.341 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:42:06 np0005590810 nova_compute[251104]: 2026-01-21 16:42:06.341 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:42:06 np0005590810 nova_compute[251104]: 2026-01-21 16:42:06.342 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:06.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:42:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:06.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:42:06 np0005590810 nova_compute[251104]: 2026-01-21 16:42:06.771 251108 DEBUG nova.network.neutron [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Successfully created port: b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 21 11:42:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.8 MiB/s wr, 18 op/s
Jan 21 11:42:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:07.210Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:42:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:07.210Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:42:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:07.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:42:08 np0005590810 nova_compute[251104]: 2026-01-21 16:42:08.226 251108 DEBUG nova.network.neutron [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Successfully updated port: b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 21 11:42:08 np0005590810 nova_compute[251104]: 2026-01-21 16:42:08.242 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:42:08 np0005590810 nova_compute[251104]: 2026-01-21 16:42:08.243 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquired lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:42:08 np0005590810 nova_compute[251104]: 2026-01-21 16:42:08.243 251108 DEBUG nova.network.neutron [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 21 11:42:08 np0005590810 nova_compute[251104]: 2026-01-21 16:42:08.383 251108 DEBUG nova.compute.manager [req-985e06a1-49cf-4f8d-929a-32ad510098e5 req-95e28763-6d04-448e-93fe-19dc265c1cef 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-changed-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:42:08 np0005590810 nova_compute[251104]: 2026-01-21 16:42:08.384 251108 DEBUG nova.compute.manager [req-985e06a1-49cf-4f8d-929a-32ad510098e5 req-95e28763-6d04-448e-93fe-19dc265c1cef 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Refreshing instance network info cache due to event network-changed-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 21 11:42:08 np0005590810 nova_compute[251104]: 2026-01-21 16:42:08.384 251108 DEBUG oslo_concurrency.lockutils [req-985e06a1-49cf-4f8d-929a-32ad510098e5 req-95e28763-6d04-448e-93fe-19dc265c1cef 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:42:08 np0005590810 nova_compute[251104]: 2026-01-21 16:42:08.443 251108 DEBUG nova.network.neutron [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 21 11:42:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:08.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:08 np0005590810 podman[271776]: 2026-01-21 16:42:08.729880744 +0000 UTC m=+0.110447465 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 11:42:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Jan 21 11:42:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:42:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:42:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:42:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.548 251108 DEBUG nova.network.neutron [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updating instance_info_cache with network_info: [{"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.564 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Releasing lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.565 251108 DEBUG nova.compute.manager [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Instance network_info: |[{"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.565 251108 DEBUG oslo_concurrency.lockutils [req-985e06a1-49cf-4f8d-929a-32ad510098e5 req-95e28763-6d04-448e-93fe-19dc265c1cef 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquired lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.565 251108 DEBUG nova.network.neutron [req-985e06a1-49cf-4f8d-929a-32ad510098e5 req-95e28763-6d04-448e-93fe-19dc265c1cef 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Refreshing network info cache for port b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.568 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Start _get_guest_xml network_info=[{"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-21T16:29:46Z,direct_url=<?>,disk_format='qcow2',id=437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ad455439fcc6470fa721af543ff96c56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-21T16:29:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'guest_format': None, 'size': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_format': None, 'image_id': '437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.573 251108 WARNING nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.578 251108 DEBUG nova.virt.libvirt.host [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.578 251108 DEBUG nova.virt.libvirt.host [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.583 251108 DEBUG nova.virt.libvirt.host [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.583 251108 DEBUG nova.virt.libvirt.host [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.584 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.584 251108 DEBUG nova.virt.hardware [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-21T16:29:45Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1e6b96db-db66-4485-bb89-2da0df7b45b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-21T16:29:46Z,direct_url=<?>,disk_format='qcow2',id=437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ad455439fcc6470fa721af543ff96c56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-21T16:29:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.584 251108 DEBUG nova.virt.hardware [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.584 251108 DEBUG nova.virt.hardware [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.585 251108 DEBUG nova.virt.hardware [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.585 251108 DEBUG nova.virt.hardware [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.585 251108 DEBUG nova.virt.hardware [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.585 251108 DEBUG nova.virt.hardware [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.585 251108 DEBUG nova.virt.hardware [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.586 251108 DEBUG nova.virt.hardware [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.586 251108 DEBUG nova.virt.hardware [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.586 251108 DEBUG nova.virt.hardware [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 21 11:42:09 np0005590810 nova_compute[251104]: 2026-01-21 16:42:09.589 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:42:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:42:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:42:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:42:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:42:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 11:42:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2655525215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.062 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.088 251108 DEBUG nova.storage.rbd_utils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.093 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.366 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:10 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 11:42:10 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318637055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.669 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.671 251108 DEBUG nova.virt.libvirt.vif [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-21T16:42:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-169262401',display_name='tempest-TestNetworkBasicOps-server-169262401',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-169262401',id=11,image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPqo8ijgDC/VmjoMkIS4OXl3nZQslio/6ZpG6oLieA37YDqhmdueG99K42pXQUKYcd0SudRZ7X6453WpvXEnc80w5WhZFjagGA5Xif2xoVOlTnllyftwuZ5Cg/7ZgrbdWg==',key_name='tempest-TestNetworkBasicOps-1889562769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d6214185b004f9c9798abfc29d1ae14',ramdisk_id='',reservation_id='r-2wbk007n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1793517209',owner_user_name='tempest-TestNetworkBasicOps-1793517209-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-21T16:42:03Z,user_data=None,user_id='918cf3fb78394ce8b3ade91a1ad699fc',uuid=7e84b1a2-5047-4d10-a2f2-f18fb832420f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.672 251108 DEBUG nova.network.os_vif_util [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converting VIF {"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.673 251108 DEBUG nova.network.os_vif_util [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:13:b6,bridge_name='br-int',has_traffic_filtering=True,id=b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7,network=Network(02c85004-4705-4aed-8c2b-9592f54dd920),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff0b81-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.675 251108 DEBUG nova.objects.instance [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e84b1a2-5047-4d10-a2f2-f18fb832420f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.693 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] End _get_guest_xml xml=<domain type="kvm">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  <uuid>7e84b1a2-5047-4d10-a2f2-f18fb832420f</uuid>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  <name>instance-0000000b</name>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  <memory>131072</memory>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  <vcpu>1</vcpu>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  <metadata>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <nova:name>tempest-TestNetworkBasicOps-server-169262401</nova:name>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <nova:creationTime>2026-01-21 16:42:09</nova:creationTime>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <nova:flavor name="m1.nano">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <nova:memory>128</nova:memory>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <nova:disk>1</nova:disk>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <nova:swap>0</nova:swap>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <nova:ephemeral>0</nova:ephemeral>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <nova:vcpus>1</nova:vcpus>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      </nova:flavor>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <nova:owner>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <nova:user uuid="918cf3fb78394ce8b3ade91a1ad699fc">tempest-TestNetworkBasicOps-1793517209-project-member</nova:user>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <nova:project uuid="3d6214185b004f9c9798abfc29d1ae14">tempest-TestNetworkBasicOps-1793517209</nova:project>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      </nova:owner>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <nova:root type="image" uuid="437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <nova:ports>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <nova:port uuid="b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        </nova:port>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      </nova:ports>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    </nova:instance>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  </metadata>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  <sysinfo type="smbios">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <system>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <entry name="manufacturer">RDO</entry>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <entry name="product">OpenStack Compute</entry>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <entry name="serial">7e84b1a2-5047-4d10-a2f2-f18fb832420f</entry>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <entry name="uuid">7e84b1a2-5047-4d10-a2f2-f18fb832420f</entry>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <entry name="family">Virtual Machine</entry>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    </system>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  </sysinfo>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  <os>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <boot dev="hd"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <smbios mode="sysinfo"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  </os>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  <features>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <acpi/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <apic/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <vmcoreinfo/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  </features>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  <clock offset="utc">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <timer name="pit" tickpolicy="delay"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <timer name="hpet" present="no"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  </clock>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  <cpu mode="host-model" match="exact">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <topology sockets="1" cores="1" threads="1"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  </cpu>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  <devices>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <disk type="network" device="disk">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <driver type="raw" cache="none"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <source protocol="rbd" name="vms/7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <host name="192.168.122.100" port="6789"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <host name="192.168.122.102" port="6789"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <host name="192.168.122.101" port="6789"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      </source>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <auth username="openstack">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <secret type="ceph" uuid="d9745984-fea8-5195-8ec5-61f685b5c785"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      </auth>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <target dev="vda" bus="virtio"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    </disk>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <disk type="network" device="cdrom">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <driver type="raw" cache="none"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <source protocol="rbd" name="vms/7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk.config">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <host name="192.168.122.100" port="6789"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <host name="192.168.122.102" port="6789"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <host name="192.168.122.101" port="6789"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      </source>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <auth username="openstack">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:        <secret type="ceph" uuid="d9745984-fea8-5195-8ec5-61f685b5c785"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      </auth>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <target dev="sda" bus="sata"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    </disk>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <interface type="ethernet">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <mac address="fa:16:3e:57:13:b6"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <model type="virtio"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <driver name="vhost" rx_queue_size="512"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <mtu size="1442"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <target dev="tapb3ff0b81-0a"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    </interface>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <serial type="pty">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <log file="/var/lib/nova/instances/7e84b1a2-5047-4d10-a2f2-f18fb832420f/console.log" append="off"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    </serial>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <video>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <model type="virtio"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    </video>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <input type="tablet" bus="usb"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <rng model="virtio">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <backend model="random">/dev/urandom</backend>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    </rng>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="pci" model="pcie-root-port"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <controller type="usb" index="0"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    <memballoon model="virtio">
Jan 21 11:42:10 np0005590810 nova_compute[251104]:      <stats period="10"/>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:    </memballoon>
Jan 21 11:42:10 np0005590810 nova_compute[251104]:  </devices>
Jan 21 11:42:10 np0005590810 nova_compute[251104]: </domain>
Jan 21 11:42:10 np0005590810 nova_compute[251104]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.694 251108 DEBUG nova.compute.manager [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Preparing to wait for external event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.694 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.695 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.695 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.696 251108 DEBUG nova.virt.libvirt.vif [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-21T16:42:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-169262401',display_name='tempest-TestNetworkBasicOps-server-169262401',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-169262401',id=11,image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPqo8ijgDC/VmjoMkIS4OXl3nZQslio/6ZpG6oLieA37YDqhmdueG99K42pXQUKYcd0SudRZ7X6453WpvXEnc80w5WhZFjagGA5Xif2xoVOlTnllyftwuZ5Cg/7ZgrbdWg==',key_name='tempest-TestNetworkBasicOps-1889562769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d6214185b004f9c9798abfc29d1ae14',ramdisk_id='',reservation_id='r-2wbk007n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1793517209',owner_user_name='tempest-TestNetworkBasicOps-1793517209-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-21T16:42:03Z,user_data=None,user_id='918cf3fb78394ce8b3ade91a1ad699fc',uuid=7e84b1a2-5047-4d10-a2f2-f18fb832420f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.696 251108 DEBUG nova.network.os_vif_util [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converting VIF {"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.696 251108 DEBUG nova.network.os_vif_util [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:13:b6,bridge_name='br-int',has_traffic_filtering=True,id=b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7,network=Network(02c85004-4705-4aed-8c2b-9592f54dd920),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff0b81-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 21 11:42:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:42:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:10.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.697 251108 DEBUG os_vif [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:13:b6,bridge_name='br-int',has_traffic_filtering=True,id=b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7,network=Network(02c85004-4705-4aed-8c2b-9592f54dd920),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff0b81-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.698 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.698 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.698 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.700 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.701 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3ff0b81-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.701 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3ff0b81-0a, col_values=(('external_ids', {'iface-id': 'b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:13:b6', 'vm-uuid': '7e84b1a2-5047-4d10-a2f2-f18fb832420f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.702 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:10 np0005590810 NetworkManager[48894]: <info>  [1769013730.7041] manager: (tapb3ff0b81-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 21 11:42:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:10.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.706 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.711 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:10 np0005590810 nova_compute[251104]: 2026-01-21 16:42:10.712 251108 INFO os_vif [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:13:b6,bridge_name='br-int',has_traffic_filtering=True,id=b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7,network=Network(02c85004-4705-4aed-8c2b-9592f54dd920),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff0b81-0a')#033[00m
Jan 21 11:42:11 np0005590810 nova_compute[251104]: 2026-01-21 16:42:11.006 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 21 11:42:11 np0005590810 nova_compute[251104]: 2026-01-21 16:42:11.006 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 21 11:42:11 np0005590810 nova_compute[251104]: 2026-01-21 16:42:11.007 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] No VIF found with MAC fa:16:3e:57:13:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 21 11:42:11 np0005590810 nova_compute[251104]: 2026-01-21 16:42:11.007 251108 INFO nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Using config drive#033[00m
Jan 21 11:42:11 np0005590810 nova_compute[251104]: 2026-01-21 16:42:11.037 251108 DEBUG nova.storage.rbd_utils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:42:11 np0005590810 nova_compute[251104]: 2026-01-21 16:42:11.061 251108 DEBUG nova.network.neutron [req-985e06a1-49cf-4f8d-929a-32ad510098e5 req-95e28763-6d04-448e-93fe-19dc265c1cef 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updated VIF entry in instance network info cache for port b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 21 11:42:11 np0005590810 nova_compute[251104]: 2026-01-21 16:42:11.062 251108 DEBUG nova.network.neutron [req-985e06a1-49cf-4f8d-929a-32ad510098e5 req-95e28763-6d04-448e-93fe-19dc265c1cef 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updating instance_info_cache with network_info: [{"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:42:11 np0005590810 nova_compute[251104]: 2026-01-21 16:42:11.073 251108 DEBUG oslo_concurrency.lockutils [req-985e06a1-49cf-4f8d-929a-32ad510098e5 req-95e28763-6d04-448e-93fe-19dc265c1cef 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Releasing lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:42:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:42:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:42:12 np0005590810 nova_compute[251104]: 2026-01-21 16:42:12.047 251108 INFO nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Creating config drive at /var/lib/nova/instances/7e84b1a2-5047-4d10-a2f2-f18fb832420f/disk.config#033[00m
Jan 21 11:42:12 np0005590810 nova_compute[251104]: 2026-01-21 16:42:12.052 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e84b1a2-5047-4d10-a2f2-f18fb832420f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa37p6oef execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:42:12 np0005590810 nova_compute[251104]: 2026-01-21 16:42:12.180 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e84b1a2-5047-4d10-a2f2-f18fb832420f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa37p6oef" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:42:12 np0005590810 nova_compute[251104]: 2026-01-21 16:42:12.214 251108 DEBUG nova.storage.rbd_utils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] rbd image 7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 21 11:42:12 np0005590810 nova_compute[251104]: 2026-01-21 16:42:12.218 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e84b1a2-5047-4d10-a2f2-f18fb832420f/disk.config 7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:42:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:12.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:12.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:42:13 np0005590810 nova_compute[251104]: 2026-01-21 16:42:13.824 251108 DEBUG oslo_concurrency.processutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e84b1a2-5047-4d10-a2f2-f18fb832420f/disk.config 7e84b1a2-5047-4d10-a2f2-f18fb832420f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:42:13 np0005590810 nova_compute[251104]: 2026-01-21 16:42:13.825 251108 INFO nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Deleting local config drive /var/lib/nova/instances/7e84b1a2-5047-4d10-a2f2-f18fb832420f/disk.config because it was imported into RBD.#033[00m
Jan 21 11:42:13 np0005590810 systemd[1]: Starting libvirt secret daemon...
Jan 21 11:42:13 np0005590810 systemd[1]: Started libvirt secret daemon.
Jan 21 11:42:13 np0005590810 kernel: tapb3ff0b81-0a: entered promiscuous mode
Jan 21 11:42:13 np0005590810 NetworkManager[48894]: <info>  [1769013733.9244] manager: (tapb3ff0b81-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Jan 21 11:42:13 np0005590810 nova_compute[251104]: 2026-01-21 16:42:13.924 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:13 np0005590810 ovn_controller[152632]: 2026-01-21T16:42:13Z|00077|binding|INFO|Claiming lport b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 for this chassis.
Jan 21 11:42:13 np0005590810 ovn_controller[152632]: 2026-01-21T16:42:13Z|00078|binding|INFO|b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7: Claiming fa:16:3e:57:13:b6 10.100.0.5
Jan 21 11:42:13 np0005590810 nova_compute[251104]: 2026-01-21 16:42:13.929 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:13 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:13.944 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:13:b6 10.100.0.5'], port_security=['fa:16:3e:57:13:b6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7e84b1a2-5047-4d10-a2f2-f18fb832420f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02c85004-4705-4aed-8c2b-9592f54dd920', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d6214185b004f9c9798abfc29d1ae14', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2533930-eed5-4ea0-a1b7-bfc1d86b4b1f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53c5efc4-3c1a-4340-9a3f-2f6f0aff9289, chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], logical_port=b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:42:13 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:13.945 163593 INFO neutron.agent.ovn.metadata.agent [-] Port b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 in datapath 02c85004-4705-4aed-8c2b-9592f54dd920 bound to our chassis#033[00m
Jan 21 11:42:13 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:13.946 163593 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02c85004-4705-4aed-8c2b-9592f54dd920#033[00m
Jan 21 11:42:13 np0005590810 systemd-udevd[271961]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 11:42:13 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:13.961 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[50683e32-ac22-4801-8e6d-ef00fd305691]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:13 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:13.962 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02c85004-41 in ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 21 11:42:13 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:13.964 260432 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02c85004-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 21 11:42:13 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:13.964 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8668a0-4fbd-4870-967c-36f9ed7ba10c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:13 np0005590810 systemd-machined[217254]: New machine qemu-4-instance-0000000b.
Jan 21 11:42:13 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:13.965 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[90218c4d-dbee-4a4a-848a-2b55e3cac8aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:13 np0005590810 NetworkManager[48894]: <info>  [1769013733.9765] device (tapb3ff0b81-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 11:42:13 np0005590810 NetworkManager[48894]: <info>  [1769013733.9777] device (tapb3ff0b81-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 21 11:42:13 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:13.987 163844 DEBUG oslo.privsep.daemon [-] privsep: reply[13f4ed4b-4a4b-486f-b5d9-91ed98a9f253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:13 np0005590810 systemd[1]: Started Virtual Machine qemu-4-instance-0000000b.
Jan 21 11:42:13 np0005590810 nova_compute[251104]: 2026-01-21 16:42:13.996 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:14 np0005590810 ovn_controller[152632]: 2026-01-21T16:42:14Z|00079|binding|INFO|Setting lport b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 ovn-installed in OVS
Jan 21 11:42:14 np0005590810 ovn_controller[152632]: 2026-01-21T16:42:14Z|00080|binding|INFO|Setting lport b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 up in Southbound
Jan 21 11:42:14 np0005590810 nova_compute[251104]: 2026-01-21 16:42:14.004 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.003 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[88b86a73-2b29-4f80-871e-33c065ac7ab4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.037 260499 DEBUG oslo.privsep.daemon [-] privsep: reply[035c515e-e281-43e6-bec4-69dc00771f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.043 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[396f3ed6-6bd7-4464-9d60-f7b7f6ef742e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:14 np0005590810 NetworkManager[48894]: <info>  [1769013734.0442] manager: (tap02c85004-40): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.073 260499 DEBUG oslo.privsep.daemon [-] privsep: reply[ebed942a-972e-46f9-9a64-d358b08efca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.077 260499 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ad1f68-5593-4497-bc10-2deaa2c272ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:14 np0005590810 NetworkManager[48894]: <info>  [1769013734.1031] device (tap02c85004-40): carrier: link connected
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.106 260499 DEBUG oslo.privsep.daemon [-] privsep: reply[941c6a3b-dc74-43b7-b994-c23d68f606ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.127 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[5340a501-a76a-47af-aa09-0b0cd093dc82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02c85004-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:75:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495077, 'reachable_time': 21141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271995, 'error': None, 'target': 'ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.145 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[92683581-1f68-4653-9838-0e3e39e53223]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:757a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495077, 'tstamp': 495077}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271996, 'error': None, 'target': 'ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.165 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[432ccaed-913c-4d7a-91fd-b25a17492fd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02c85004-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:75:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495077, 'reachable_time': 21141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271997, 'error': None, 'target': 'ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.202 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[fbea63da-c972-4aab-ba76-b57931d18118]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.266 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[02a9d008-b4ea-483b-a1cd-29bf74e9dde4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.267 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02c85004-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.268 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.268 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02c85004-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:42:14 np0005590810 NetworkManager[48894]: <info>  [1769013734.2704] manager: (tap02c85004-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Jan 21 11:42:14 np0005590810 kernel: tap02c85004-40: entered promiscuous mode
Jan 21 11:42:14 np0005590810 nova_compute[251104]: 2026-01-21 16:42:14.270 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.275 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02c85004-40, col_values=(('external_ids', {'iface-id': 'ca0c2386-db01-4ad5-b2b5-6816617018c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:42:14 np0005590810 nova_compute[251104]: 2026-01-21 16:42:14.276 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:14 np0005590810 ovn_controller[152632]: 2026-01-21T16:42:14Z|00081|binding|INFO|Releasing lport ca0c2386-db01-4ad5-b2b5-6816617018c1 from this chassis (sb_readonly=0)
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.279 163593 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02c85004-4705-4aed-8c2b-9592f54dd920.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02c85004-4705-4aed-8c2b-9592f54dd920.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.280 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4d87f3-c563-4408-8fad-d4df3772e9b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.281 163593 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: global
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    log         /dev/log local0 debug
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    log-tag     haproxy-metadata-proxy-02c85004-4705-4aed-8c2b-9592f54dd920
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    user        root
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    group       root
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    maxconn     1024
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    pidfile     /var/lib/neutron/external/pids/02c85004-4705-4aed-8c2b-9592f54dd920.pid.haproxy
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    daemon
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: defaults
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    log global
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    mode http
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    option httplog
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    option dontlognull
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    option http-server-close
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    option forwardfor
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    retries                 3
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    timeout http-request    30s
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    timeout connect         30s
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    timeout client          32s
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    timeout server          32s
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    timeout http-keep-alive 30s
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: listen listener
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    bind 169.254.169.254:80
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    server metadata /var/lib/neutron/metadata_proxy
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]:    http-request add-header X-OVN-Network-ID 02c85004-4705-4aed-8c2b-9592f54dd920
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 21 11:42:14 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:14.282 163593 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920', 'env', 'PROCESS_TAG=haproxy-02c85004-4705-4aed-8c2b-9592f54dd920', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02c85004-4705-4aed-8c2b-9592f54dd920.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 21 11:42:14 np0005590810 nova_compute[251104]: 2026-01-21 16:42:14.290 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:14.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:42:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:14.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:42:14 np0005590810 podman[272067]: 2026-01-21 16:42:14.640783317 +0000 UTC m=+0.027683379 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 11:42:14 np0005590810 nova_compute[251104]: 2026-01-21 16:42:14.759 251108 DEBUG nova.virt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Emitting event <LifecycleEvent: 1769013734.7595255, 7e84b1a2-5047-4d10-a2f2-f18fb832420f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 21 11:42:14 np0005590810 nova_compute[251104]: 2026-01-21 16:42:14.760 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] VM Started (Lifecycle Event)#033[00m
Jan 21 11:42:14 np0005590810 nova_compute[251104]: 2026-01-21 16:42:14.779 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 21 11:42:14 np0005590810 nova_compute[251104]: 2026-01-21 16:42:14.782 251108 DEBUG nova.virt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Emitting event <LifecycleEvent: 1769013734.760478, 7e84b1a2-5047-4d10-a2f2-f18fb832420f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 21 11:42:14 np0005590810 nova_compute[251104]: 2026-01-21 16:42:14.783 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] VM Paused (Lifecycle Event)#033[00m
Jan 21 11:42:14 np0005590810 nova_compute[251104]: 2026-01-21 16:42:14.799 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 21 11:42:14 np0005590810 nova_compute[251104]: 2026-01-21 16:42:14.802 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 21 11:42:14 np0005590810 nova_compute[251104]: 2026-01-21 16:42:14.818 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 21 11:42:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.187 251108 DEBUG nova.compute.manager [req-6dfaa0c4-514b-473c-8d42-fe8b65a7bee1 req-3438be64-2922-4572-a638-ff52e5b28b00 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.188 251108 DEBUG oslo_concurrency.lockutils [req-6dfaa0c4-514b-473c-8d42-fe8b65a7bee1 req-3438be64-2922-4572-a638-ff52e5b28b00 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.189 251108 DEBUG oslo_concurrency.lockutils [req-6dfaa0c4-514b-473c-8d42-fe8b65a7bee1 req-3438be64-2922-4572-a638-ff52e5b28b00 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.190 251108 DEBUG oslo_concurrency.lockutils [req-6dfaa0c4-514b-473c-8d42-fe8b65a7bee1 req-3438be64-2922-4572-a638-ff52e5b28b00 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.190 251108 DEBUG nova.compute.manager [req-6dfaa0c4-514b-473c-8d42-fe8b65a7bee1 req-3438be64-2922-4572-a638-ff52e5b28b00 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Processing event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.191 251108 DEBUG nova.compute.manager [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.195 251108 DEBUG nova.virt.driver [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] Emitting event <LifecycleEvent: 1769013735.1950006, 7e84b1a2-5047-4d10-a2f2-f18fb832420f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.195 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] VM Resumed (Lifecycle Event)
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.198 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.202 251108 INFO nova.virt.libvirt.driver [-] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Instance spawned successfully.
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.203 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.219 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.227 251108 DEBUG nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.231 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.232 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.232 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.233 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.234 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.234 251108 DEBUG nova.virt.libvirt.driver [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 21 11:42:15 np0005590810 podman[272067]: 2026-01-21 16:42:15.250893256 +0000 UTC m=+0.637793298 container create 0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.260 251108 INFO nova.compute.manager [None req-02967f4d-32e4-47fb-ba94-b2137ea27ef7 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.292 251108 INFO nova.compute.manager [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Took 11.54 seconds to spawn the instance on the hypervisor.
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.293 251108 DEBUG nova.compute.manager [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.357 251108 INFO nova.compute.manager [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Took 12.71 seconds to build instance.
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.368 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.372 251108 DEBUG oslo_concurrency.lockutils [None req-e5c115ba-a13c-4a6e-afa1-41f295341670 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:42:15 np0005590810 systemd[1]: Started libpod-conmon-0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff.scope.
Jan 21 11:42:15 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:42:15 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b0c78822b27ce57ab420da7e97560d332883cfeded8c9724e998ad2939e3ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:15 np0005590810 podman[272067]: 2026-01-21 16:42:15.548293939 +0000 UTC m=+0.935194031 container init 0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 21 11:42:15 np0005590810 podman[272067]: 2026-01-21 16:42:15.553921613 +0000 UTC m=+0.940821655 container start 0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 11:42:15 np0005590810 neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920[272090]: [NOTICE]   (272094) : New worker (272096) forked
Jan 21 11:42:15 np0005590810 neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920[272090]: [NOTICE]   (272094) : Loading success.
Jan 21 11:42:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:42:15] "GET /metrics HTTP/1.1" 200 48682 "" "Prometheus/2.51.0"
Jan 21 11:42:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:42:15] "GET /metrics HTTP/1.1" 200 48682 "" "Prometheus/2.51.0"
Jan 21 11:42:15 np0005590810 nova_compute[251104]: 2026-01-21 16:42:15.703 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:42:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:42:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:16.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:16.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Jan 21 11:42:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:17.211Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:42:17 np0005590810 nova_compute[251104]: 2026-01-21 16:42:17.272 251108 DEBUG nova.compute.manager [req-4fa13d76-ecc8-4bde-b14c-7a2399b54e8f req-562ea09b-4bc1-4b42-9fe9-90162037f5b2 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 21 11:42:17 np0005590810 nova_compute[251104]: 2026-01-21 16:42:17.273 251108 DEBUG oslo_concurrency.lockutils [req-4fa13d76-ecc8-4bde-b14c-7a2399b54e8f req-562ea09b-4bc1-4b42-9fe9-90162037f5b2 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:42:17 np0005590810 nova_compute[251104]: 2026-01-21 16:42:17.273 251108 DEBUG oslo_concurrency.lockutils [req-4fa13d76-ecc8-4bde-b14c-7a2399b54e8f req-562ea09b-4bc1-4b42-9fe9-90162037f5b2 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:42:17 np0005590810 nova_compute[251104]: 2026-01-21 16:42:17.273 251108 DEBUG oslo_concurrency.lockutils [req-4fa13d76-ecc8-4bde-b14c-7a2399b54e8f req-562ea09b-4bc1-4b42-9fe9-90162037f5b2 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:42:17 np0005590810 nova_compute[251104]: 2026-01-21 16:42:17.274 251108 DEBUG nova.compute.manager [req-4fa13d76-ecc8-4bde-b14c-7a2399b54e8f req-562ea09b-4bc1-4b42-9fe9-90162037f5b2 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] No waiting events found dispatching network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 21 11:42:17 np0005590810 nova_compute[251104]: 2026-01-21 16:42:17.274 251108 WARNING nova.compute.manager [req-4fa13d76-ecc8-4bde-b14c-7a2399b54e8f req-562ea09b-4bc1-4b42-9fe9-90162037f5b2 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received unexpected event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 for instance with vm_state active and task_state None.
Jan 21 11:42:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:18.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:18.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 13 KiB/s wr, 21 op/s
Jan 21 11:42:19 np0005590810 ovn_controller[152632]: 2026-01-21T16:42:19Z|00082|binding|INFO|Releasing lport ca0c2386-db01-4ad5-b2b5-6816617018c1 from this chassis (sb_readonly=0)
Jan 21 11:42:19 np0005590810 NetworkManager[48894]: <info>  [1769013739.4267] manager: (patch-br-int-to-provnet-b53c687f-ce80-4374-bb32-b17e6ca8f621): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 21 11:42:19 np0005590810 NetworkManager[48894]: <info>  [1769013739.4276] manager: (patch-provnet-b53c687f-ce80-4374-bb32-b17e6ca8f621-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Jan 21 11:42:19 np0005590810 nova_compute[251104]: 2026-01-21 16:42:19.425 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:42:19 np0005590810 ovn_controller[152632]: 2026-01-21T16:42:19Z|00083|binding|INFO|Releasing lport ca0c2386-db01-4ad5-b2b5-6816617018c1 from this chassis (sb_readonly=0)
Jan 21 11:42:19 np0005590810 nova_compute[251104]: 2026-01-21 16:42:19.463 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:42:19 np0005590810 nova_compute[251104]: 2026-01-21 16:42:19.469 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:42:20 np0005590810 nova_compute[251104]: 2026-01-21 16:42:20.328 251108 DEBUG nova.compute.manager [req-128f3348-2c78-41c0-bb76-d921438922e5 req-22ac83df-1ba1-40a5-951e-863450e4f08a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-changed-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 21 11:42:20 np0005590810 nova_compute[251104]: 2026-01-21 16:42:20.328 251108 DEBUG nova.compute.manager [req-128f3348-2c78-41c0-bb76-d921438922e5 req-22ac83df-1ba1-40a5-951e-863450e4f08a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Refreshing instance network info cache due to event network-changed-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 21 11:42:20 np0005590810 nova_compute[251104]: 2026-01-21 16:42:20.328 251108 DEBUG oslo_concurrency.lockutils [req-128f3348-2c78-41c0-bb76-d921438922e5 req-22ac83df-1ba1-40a5-951e-863450e4f08a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 21 11:42:20 np0005590810 nova_compute[251104]: 2026-01-21 16:42:20.329 251108 DEBUG oslo_concurrency.lockutils [req-128f3348-2c78-41c0-bb76-d921438922e5 req-22ac83df-1ba1-40a5-951e-863450e4f08a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquired lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 21 11:42:20 np0005590810 nova_compute[251104]: 2026-01-21 16:42:20.329 251108 DEBUG nova.network.neutron [req-128f3348-2c78-41c0-bb76-d921438922e5 req-22ac83df-1ba1-40a5-951e-863450e4f08a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Refreshing network info cache for port b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 21 11:42:20 np0005590810 nova_compute[251104]: 2026-01-21 16:42:20.371 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:42:20 np0005590810 nova_compute[251104]: 2026-01-21 16:42:20.706 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:42:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:20.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:20.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 85 op/s
Jan 21 11:42:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:42:21 np0005590810 nova_compute[251104]: 2026-01-21 16:42:21.512 251108 DEBUG nova.network.neutron [req-128f3348-2c78-41c0-bb76-d921438922e5 req-22ac83df-1ba1-40a5-951e-863450e4f08a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updated VIF entry in instance network info cache for port b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 21 11:42:21 np0005590810 nova_compute[251104]: 2026-01-21 16:42:21.512 251108 DEBUG nova.network.neutron [req-128f3348-2c78-41c0-bb76-d921438922e5 req-22ac83df-1ba1-40a5-951e-863450e4f08a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updating instance_info_cache with network_info: [{"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 21 11:42:21 np0005590810 nova_compute[251104]: 2026-01-21 16:42:21.531 251108 DEBUG oslo_concurrency.lockutils [req-128f3348-2c78-41c0-bb76-d921438922e5 req-22ac83df-1ba1-40a5-951e-863450e4f08a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Releasing lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 21 11:42:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:22.033 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 11:42:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:22.034 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 11:42:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:22.035 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 11:42:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:22.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:22.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 21 11:42:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:42:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:42:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:24.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:24.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 21 11:42:25 np0005590810 nova_compute[251104]: 2026-01-21 16:42:25.373 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:42:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:42:25] "GET /metrics HTTP/1.1" 200 48675 "" "Prometheus/2.51.0"
Jan 21 11:42:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:42:25] "GET /metrics HTTP/1.1" 200 48675 "" "Prometheus/2.51.0"
Jan 21 11:42:25 np0005590810 nova_compute[251104]: 2026-01-21 16:42:25.707 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:42:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:42:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:26.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:42:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:26.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 21 11:42:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:27.212Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:42:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:28.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:28.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:28 np0005590810 ceph-mgr[74671]: [dashboard INFO request] [192.168.122.100:47426] [POST] [200] [0.002s] [4.0B] [b94e5070-8409-4c93-a4f6-8ee8e6f4e3e9] /api/prometheus_receiver
Jan 21 11:42:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 21 11:42:30 np0005590810 nova_compute[251104]: 2026-01-21 16:42:30.376 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:30 np0005590810 nova_compute[251104]: 2026-01-21 16:42:30.709 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:30.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:30.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:31 np0005590810 ovn_controller[152632]: 2026-01-21T16:42:31Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:13:b6 10.100.0.5
Jan 21 11:42:31 np0005590810 ovn_controller[152632]: 2026-01-21T16:42:31Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:13:b6 10.100.0.5
Jan 21 11:42:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 155 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 135 op/s
Jan 21 11:42:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:42:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:32.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:32.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 155 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 239 KiB/s rd, 3.8 MiB/s wr, 71 op/s
Jan 21 11:42:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:34.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:34.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 155 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 239 KiB/s rd, 3.8 MiB/s wr, 71 op/s
Jan 21 11:42:35 np0005590810 nova_compute[251104]: 2026-01-21 16:42:35.378 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:42:35] "GET /metrics HTTP/1.1" 200 48675 "" "Prometheus/2.51.0"
Jan 21 11:42:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:42:35] "GET /metrics HTTP/1.1" 200 48675 "" "Prometheus/2.51.0"
Jan 21 11:42:35 np0005590810 nova_compute[251104]: 2026-01-21 16:42:35.711 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:42:36 np0005590810 podman[272152]: 2026-01-21 16:42:36.688435329 +0000 UTC m=+0.063571663 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 11:42:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:36.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:36.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 3.9 MiB/s wr, 101 op/s
Jan 21 11:42:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:37.212Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:42:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:37.213Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:42:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 4.5 MiB/s wr, 115 op/s
Jan 21 11:42:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 479 KiB/s rd, 5.5 MiB/s wr, 143 op/s
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:42:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:42:38 np0005590810 nova_compute[251104]: 2026-01-21 16:42:38.183 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:38 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:38.182 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:19:7b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:3b:98:31:96:2a'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:42:38 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:38.184 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 11:42:38 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:42:38.184 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f6e8413f-2ba2-49cb-8bd6-36b8085ce01c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:42:38 np0005590810 nova_compute[251104]: 2026-01-21 16:42:38.363 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:42:38 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:42:38 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:42:38 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:42:38 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:42:38 np0005590810 podman[272345]: 2026-01-21 16:42:38.512740269 +0000 UTC m=+0.079827266 container create 9e457e18ef231efe81494b63c89827972a7819d8a2ddbabd4512d89e37d94eae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_panini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 21 11:42:38 np0005590810 podman[272345]: 2026-01-21 16:42:38.457373462 +0000 UTC m=+0.024460469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:42:38 np0005590810 systemd[1]: Started libpod-conmon-9e457e18ef231efe81494b63c89827972a7819d8a2ddbabd4512d89e37d94eae.scope.
Jan 21 11:42:38 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:42:38 np0005590810 podman[272345]: 2026-01-21 16:42:38.615612819 +0000 UTC m=+0.182699836 container init 9e457e18ef231efe81494b63c89827972a7819d8a2ddbabd4512d89e37d94eae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 21 11:42:38 np0005590810 podman[272345]: 2026-01-21 16:42:38.624816914 +0000 UTC m=+0.191903911 container start 9e457e18ef231efe81494b63c89827972a7819d8a2ddbabd4512d89e37d94eae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 21 11:42:38 np0005590810 trusting_panini[272362]: 167 167
Jan 21 11:42:38 np0005590810 systemd[1]: libpod-9e457e18ef231efe81494b63c89827972a7819d8a2ddbabd4512d89e37d94eae.scope: Deactivated successfully.
Jan 21 11:42:38 np0005590810 podman[272345]: 2026-01-21 16:42:38.671651766 +0000 UTC m=+0.238738793 container attach 9e457e18ef231efe81494b63c89827972a7819d8a2ddbabd4512d89e37d94eae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_panini, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 11:42:38 np0005590810 podman[272345]: 2026-01-21 16:42:38.672432871 +0000 UTC m=+0.239519868 container died 9e457e18ef231efe81494b63c89827972a7819d8a2ddbabd4512d89e37d94eae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 21 11:42:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:42:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:38.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:42:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:38.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:38 np0005590810 systemd[1]: var-lib-containers-storage-overlay-412c446ff72ff828547c9ab051fce9b0b86562cd6607afb86d3c2ce69f7e7b21-merged.mount: Deactivated successfully.
Jan 21 11:42:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:38.845Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:42:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:38.845Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:42:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:38.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:42:38 np0005590810 podman[272345]: 2026-01-21 16:42:38.880604296 +0000 UTC m=+0.447691303 container remove 9e457e18ef231efe81494b63c89827972a7819d8a2ddbabd4512d89e37d94eae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_panini, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 11:42:38 np0005590810 systemd[1]: libpod-conmon-9e457e18ef231efe81494b63c89827972a7819d8a2ddbabd4512d89e37d94eae.scope: Deactivated successfully.
Jan 21 11:42:38 np0005590810 podman[272380]: 2026-01-21 16:42:38.992351651 +0000 UTC m=+0.190210959 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 11:42:39 np0005590810 podman[272414]: 2026-01-21 16:42:39.131031631 +0000 UTC m=+0.111490818 container create 3c623beac85c7362450cb1cb278a5bce4ac5ed5a55a891b9c412f427d938631b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 11:42:39 np0005590810 podman[272414]: 2026-01-21 16:42:39.052521407 +0000 UTC m=+0.032980614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:42:39 np0005590810 systemd[1]: Started libpod-conmon-3c623beac85c7362450cb1cb278a5bce4ac5ed5a55a891b9c412f427d938631b.scope.
Jan 21 11:42:39 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:42:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d4d9133af6491f5b4d5161d68d1ec88f57da6a589f5135ccf27577b58dd0a4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d4d9133af6491f5b4d5161d68d1ec88f57da6a589f5135ccf27577b58dd0a4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d4d9133af6491f5b4d5161d68d1ec88f57da6a589f5135ccf27577b58dd0a4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d4d9133af6491f5b4d5161d68d1ec88f57da6a589f5135ccf27577b58dd0a4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:39 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d4d9133af6491f5b4d5161d68d1ec88f57da6a589f5135ccf27577b58dd0a4f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:42:39
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'images', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'backups', '.nfs']
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:42:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:42:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:42:39 np0005590810 podman[272414]: 2026-01-21 16:42:39.329412723 +0000 UTC m=+0.309871910 container init 3c623beac85c7362450cb1cb278a5bce4ac5ed5a55a891b9c412f427d938631b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_curran, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 21 11:42:39 np0005590810 podman[272414]: 2026-01-21 16:42:39.338599968 +0000 UTC m=+0.319059155 container start 3c623beac85c7362450cb1cb278a5bce4ac5ed5a55a891b9c412f427d938631b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_curran, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 21 11:42:39 np0005590810 podman[272414]: 2026-01-21 16:42:39.343151569 +0000 UTC m=+0.323610746 container attach 3c623beac85c7362450cb1cb278a5bce4ac5ed5a55a891b9c412f427d938631b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_curran, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 152 KiB/s wr, 43 op/s
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011050157297974865 of space, bias 1.0, pg target 0.33150471893924593 quantized to 32 (current 32)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:42:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:42:39 np0005590810 blissful_curran[272432]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:42:39 np0005590810 blissful_curran[272432]: --> All data devices are unavailable
Jan 21 11:42:39 np0005590810 systemd[1]: libpod-3c623beac85c7362450cb1cb278a5bce4ac5ed5a55a891b9c412f427d938631b.scope: Deactivated successfully.
Jan 21 11:42:39 np0005590810 podman[272414]: 2026-01-21 16:42:39.732568345 +0000 UTC m=+0.713027532 container died 3c623beac85c7362450cb1cb278a5bce4ac5ed5a55a891b9c412f427d938631b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:42:39 np0005590810 systemd[1]: var-lib-containers-storage-overlay-3d4d9133af6491f5b4d5161d68d1ec88f57da6a589f5135ccf27577b58dd0a4f-merged.mount: Deactivated successfully.
Jan 21 11:42:39 np0005590810 podman[272414]: 2026-01-21 16:42:39.779530422 +0000 UTC m=+0.759989609 container remove 3c623beac85c7362450cb1cb278a5bce4ac5ed5a55a891b9c412f427d938631b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:42:39 np0005590810 systemd[1]: libpod-conmon-3c623beac85c7362450cb1cb278a5bce4ac5ed5a55a891b9c412f427d938631b.scope: Deactivated successfully.
Jan 21 11:42:40 np0005590810 nova_compute[251104]: 2026-01-21 16:42:40.380 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:40 np0005590810 podman[272548]: 2026-01-21 16:42:40.409408434 +0000 UTC m=+0.044522932 container create 891e8e3659800413cb0f5ccfd71a5e7c5d82ff774ee10bfe50ca2dc986147057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 11:42:40 np0005590810 systemd[1]: Started libpod-conmon-891e8e3659800413cb0f5ccfd71a5e7c5d82ff774ee10bfe50ca2dc986147057.scope.
Jan 21 11:42:40 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:42:40 np0005590810 podman[272548]: 2026-01-21 16:42:40.391364714 +0000 UTC m=+0.026479232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:42:40 np0005590810 podman[272548]: 2026-01-21 16:42:40.487535346 +0000 UTC m=+0.122649864 container init 891e8e3659800413cb0f5ccfd71a5e7c5d82ff774ee10bfe50ca2dc986147057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 21 11:42:40 np0005590810 podman[272548]: 2026-01-21 16:42:40.495311837 +0000 UTC m=+0.130426335 container start 891e8e3659800413cb0f5ccfd71a5e7c5d82ff774ee10bfe50ca2dc986147057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:42:40 np0005590810 podman[272548]: 2026-01-21 16:42:40.498386052 +0000 UTC m=+0.133500580 container attach 891e8e3659800413cb0f5ccfd71a5e7c5d82ff774ee10bfe50ca2dc986147057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 21 11:42:40 np0005590810 magical_brahmagupta[272564]: 167 167
Jan 21 11:42:40 np0005590810 systemd[1]: libpod-891e8e3659800413cb0f5ccfd71a5e7c5d82ff774ee10bfe50ca2dc986147057.scope: Deactivated successfully.
Jan 21 11:42:40 np0005590810 podman[272548]: 2026-01-21 16:42:40.500401695 +0000 UTC m=+0.135516223 container died 891e8e3659800413cb0f5ccfd71a5e7c5d82ff774ee10bfe50ca2dc986147057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 21 11:42:40 np0005590810 systemd[1]: var-lib-containers-storage-overlay-9f9e1ee65c1d898ea964bc7afdb69bbfe415e6d1cb9e699f3ef99865472f89e9-merged.mount: Deactivated successfully.
Jan 21 11:42:40 np0005590810 podman[272548]: 2026-01-21 16:42:40.539649482 +0000 UTC m=+0.174763990 container remove 891e8e3659800413cb0f5ccfd71a5e7c5d82ff774ee10bfe50ca2dc986147057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_brahmagupta, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 11:42:40 np0005590810 systemd[1]: libpod-conmon-891e8e3659800413cb0f5ccfd71a5e7c5d82ff774ee10bfe50ca2dc986147057.scope: Deactivated successfully.
Jan 21 11:42:40 np0005590810 nova_compute[251104]: 2026-01-21 16:42:40.713 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:40 np0005590810 podman[272590]: 2026-01-21 16:42:40.717031343 +0000 UTC m=+0.046034219 container create 1500e312527fc22abef1365bfa9541788763ba494cc98a0af224f46f9c38bcdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ramanujan, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Jan 21 11:42:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:42:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:40.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:42:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:40.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:40 np0005590810 systemd[1]: Started libpod-conmon-1500e312527fc22abef1365bfa9541788763ba494cc98a0af224f46f9c38bcdc.scope.
Jan 21 11:42:40 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:42:40 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9e3954115fb3041047c07e865166f5a5febc1eaa020564a60c06e660688f3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:40 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9e3954115fb3041047c07e865166f5a5febc1eaa020564a60c06e660688f3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:40 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9e3954115fb3041047c07e865166f5a5febc1eaa020564a60c06e660688f3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:40 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9e3954115fb3041047c07e865166f5a5febc1eaa020564a60c06e660688f3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:40 np0005590810 podman[272590]: 2026-01-21 16:42:40.789446628 +0000 UTC m=+0.118449524 container init 1500e312527fc22abef1365bfa9541788763ba494cc98a0af224f46f9c38bcdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ramanujan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 21 11:42:40 np0005590810 podman[272590]: 2026-01-21 16:42:40.698822287 +0000 UTC m=+0.027825193 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:42:40 np0005590810 podman[272590]: 2026-01-21 16:42:40.796713713 +0000 UTC m=+0.125716589 container start 1500e312527fc22abef1365bfa9541788763ba494cc98a0af224f46f9c38bcdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ramanujan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:42:40 np0005590810 podman[272590]: 2026-01-21 16:42:40.800269653 +0000 UTC m=+0.129272529 container attach 1500e312527fc22abef1365bfa9541788763ba494cc98a0af224f46f9c38bcdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ramanujan, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]: {
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:    "0": [
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:        {
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            "devices": [
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "/dev/loop3"
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            ],
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            "lv_name": "ceph_lv0",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            "lv_size": "21470642176",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            "name": "ceph_lv0",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            "tags": {
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.cluster_name": "ceph",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.crush_device_class": "",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.encrypted": "0",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.osd_id": "0",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.type": "block",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.vdo": "0",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:                "ceph.with_tpm": "0"
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            },
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            "type": "block",
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:            "vg_name": "ceph_vg0"
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:        }
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]:    ]
Jan 21 11:42:41 np0005590810 optimistic_ramanujan[272607]: }
Jan 21 11:42:41 np0005590810 systemd[1]: libpod-1500e312527fc22abef1365bfa9541788763ba494cc98a0af224f46f9c38bcdc.scope: Deactivated successfully.
Jan 21 11:42:41 np0005590810 podman[272590]: 2026-01-21 16:42:41.118492111 +0000 UTC m=+0.447495007 container died 1500e312527fc22abef1365bfa9541788763ba494cc98a0af224f46f9c38bcdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ramanujan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 21 11:42:41 np0005590810 systemd[1]: var-lib-containers-storage-overlay-cc9e3954115fb3041047c07e865166f5a5febc1eaa020564a60c06e660688f3b-merged.mount: Deactivated successfully.
Jan 21 11:42:41 np0005590810 podman[272590]: 2026-01-21 16:42:41.192045472 +0000 UTC m=+0.521048388 container remove 1500e312527fc22abef1365bfa9541788763ba494cc98a0af224f46f9c38bcdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ramanujan, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 11:42:41 np0005590810 systemd[1]: libpod-conmon-1500e312527fc22abef1365bfa9541788763ba494cc98a0af224f46f9c38bcdc.scope: Deactivated successfully.
Jan 21 11:42:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:42:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 168 KiB/s wr, 134 op/s
Jan 21 11:42:41 np0005590810 podman[272721]: 2026-01-21 16:42:41.828395885 +0000 UTC m=+0.051804308 container create 5f0121af16beb4247ab42368aa788837aee40663caedeb5e424bcf902093d86f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:42:41 np0005590810 systemd[1]: Started libpod-conmon-5f0121af16beb4247ab42368aa788837aee40663caedeb5e424bcf902093d86f.scope.
Jan 21 11:42:41 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:42:41 np0005590810 podman[272721]: 2026-01-21 16:42:41.806922039 +0000 UTC m=+0.030330492 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:42:41 np0005590810 podman[272721]: 2026-01-21 16:42:41.907768576 +0000 UTC m=+0.131176999 container init 5f0121af16beb4247ab42368aa788837aee40663caedeb5e424bcf902093d86f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_stonebraker, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 11:42:41 np0005590810 podman[272721]: 2026-01-21 16:42:41.914184895 +0000 UTC m=+0.137593318 container start 5f0121af16beb4247ab42368aa788837aee40663caedeb5e424bcf902093d86f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 21 11:42:41 np0005590810 podman[272721]: 2026-01-21 16:42:41.916954011 +0000 UTC m=+0.140362434 container attach 5f0121af16beb4247ab42368aa788837aee40663caedeb5e424bcf902093d86f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_stonebraker, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:42:41 np0005590810 intelligent_stonebraker[272737]: 167 167
Jan 21 11:42:41 np0005590810 systemd[1]: libpod-5f0121af16beb4247ab42368aa788837aee40663caedeb5e424bcf902093d86f.scope: Deactivated successfully.
Jan 21 11:42:41 np0005590810 podman[272721]: 2026-01-21 16:42:41.91887738 +0000 UTC m=+0.142285803 container died 5f0121af16beb4247ab42368aa788837aee40663caedeb5e424bcf902093d86f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Jan 21 11:42:41 np0005590810 systemd[1]: var-lib-containers-storage-overlay-1ff56d6aea8fde11e7c287c74aa7b7348e925df6d9275c986e2881d7e93bac59-merged.mount: Deactivated successfully.
Jan 21 11:42:41 np0005590810 podman[272721]: 2026-01-21 16:42:41.954775724 +0000 UTC m=+0.178184147 container remove 5f0121af16beb4247ab42368aa788837aee40663caedeb5e424bcf902093d86f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_stonebraker, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 21 11:42:41 np0005590810 systemd[1]: libpod-conmon-5f0121af16beb4247ab42368aa788837aee40663caedeb5e424bcf902093d86f.scope: Deactivated successfully.
Jan 21 11:42:42 np0005590810 podman[272761]: 2026-01-21 16:42:42.134344652 +0000 UTC m=+0.044904393 container create a365703ca0fe66640fb1b077e6faab05e28a99a95738e0fd23541858bedb1b18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 11:42:42 np0005590810 systemd[1]: Started libpod-conmon-a365703ca0fe66640fb1b077e6faab05e28a99a95738e0fd23541858bedb1b18.scope.
Jan 21 11:42:42 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:42:42 np0005590810 podman[272761]: 2026-01-21 16:42:42.116688695 +0000 UTC m=+0.027248466 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:42:42 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c271ca509efb0aec7ba00ee27ce6d33a1428b2d753d99b330fd54159cbd22736/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:42 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c271ca509efb0aec7ba00ee27ce6d33a1428b2d753d99b330fd54159cbd22736/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:42 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c271ca509efb0aec7ba00ee27ce6d33a1428b2d753d99b330fd54159cbd22736/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:42 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c271ca509efb0aec7ba00ee27ce6d33a1428b2d753d99b330fd54159cbd22736/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:42:42 np0005590810 podman[272761]: 2026-01-21 16:42:42.22813574 +0000 UTC m=+0.138695511 container init a365703ca0fe66640fb1b077e6faab05e28a99a95738e0fd23541858bedb1b18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bartik, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 21 11:42:42 np0005590810 podman[272761]: 2026-01-21 16:42:42.234447626 +0000 UTC m=+0.145007377 container start a365703ca0fe66640fb1b077e6faab05e28a99a95738e0fd23541858bedb1b18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bartik, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 21 11:42:42 np0005590810 podman[272761]: 2026-01-21 16:42:42.237896913 +0000 UTC m=+0.148456664 container attach a365703ca0fe66640fb1b077e6faab05e28a99a95738e0fd23541858bedb1b18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:42:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:42:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:42.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:42:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:42.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:42 np0005590810 lvm[272853]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:42:42 np0005590810 lvm[272853]: VG ceph_vg0 finished
Jan 21 11:42:42 np0005590810 hardcore_bartik[272777]: {}
Jan 21 11:42:42 np0005590810 systemd[1]: libpod-a365703ca0fe66640fb1b077e6faab05e28a99a95738e0fd23541858bedb1b18.scope: Deactivated successfully.
Jan 21 11:42:42 np0005590810 systemd[1]: libpod-a365703ca0fe66640fb1b077e6faab05e28a99a95738e0fd23541858bedb1b18.scope: Consumed 1.226s CPU time.
Jan 21 11:42:42 np0005590810 podman[272761]: 2026-01-21 16:42:42.972515613 +0000 UTC m=+0.883075364 container died a365703ca0fe66640fb1b077e6faab05e28a99a95738e0fd23541858bedb1b18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 21 11:42:43 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c271ca509efb0aec7ba00ee27ce6d33a1428b2d753d99b330fd54159cbd22736-merged.mount: Deactivated successfully.
Jan 21 11:42:43 np0005590810 podman[272761]: 2026-01-21 16:42:43.111458141 +0000 UTC m=+1.022017892 container remove a365703ca0fe66640fb1b077e6faab05e28a99a95738e0fd23541858bedb1b18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bartik, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:42:43 np0005590810 systemd[1]: libpod-conmon-a365703ca0fe66640fb1b077e6faab05e28a99a95738e0fd23541858bedb1b18.scope: Deactivated successfully.
Jan 21 11:42:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:42:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:42:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:42:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:42:43 np0005590810 nova_compute[251104]: 2026-01-21 16:42:43.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:42:43 np0005590810 nova_compute[251104]: 2026-01-21 16:42:43.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:42:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 168 KiB/s wr, 134 op/s
Jan 21 11:42:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:42:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:42:44 np0005590810 nova_compute[251104]: 2026-01-21 16:42:44.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:42:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:42:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:44.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:42:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:44.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:45 np0005590810 nova_compute[251104]: 2026-01-21 16:42:45.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:42:45 np0005590810 nova_compute[251104]: 2026-01-21 16:42:45.369 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 11:42:45 np0005590810 nova_compute[251104]: 2026-01-21 16:42:45.370 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 11:42:45 np0005590810 nova_compute[251104]: 2026-01-21 16:42:45.384 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:42:45] "GET /metrics HTTP/1.1" 200 48679 "" "Prometheus/2.51.0"
Jan 21 11:42:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:42:45] "GET /metrics HTTP/1.1" 200 48679 "" "Prometheus/2.51.0"
Jan 21 11:42:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 21 KiB/s wr, 92 op/s
Jan 21 11:42:45 np0005590810 nova_compute[251104]: 2026-01-21 16:42:45.714 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:45 np0005590810 nova_compute[251104]: 2026-01-21 16:42:45.948 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:42:45 np0005590810 nova_compute[251104]: 2026-01-21 16:42:45.948 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquired lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:42:45 np0005590810 nova_compute[251104]: 2026-01-21 16:42:45.948 251108 DEBUG nova.network.neutron [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 21 11:42:45 np0005590810 nova_compute[251104]: 2026-01-21 16:42:45.948 251108 DEBUG nova.objects.instance [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7e84b1a2-5047-4d10-a2f2-f18fb832420f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 21 11:42:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:42:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:46.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:42:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:46.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:42:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:47.213Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:42:47 np0005590810 nova_compute[251104]: 2026-01-21 16:42:47.305 251108 DEBUG nova.network.neutron [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updating instance_info_cache with network_info: [{"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:42:47 np0005590810 nova_compute[251104]: 2026-01-21 16:42:47.324 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Releasing lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:42:47 np0005590810 nova_compute[251104]: 2026-01-21 16:42:47.324 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 21 11:42:47 np0005590810 nova_compute[251104]: 2026-01-21 16:42:47.325 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:42:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 78 op/s
Jan 21 11:42:48 np0005590810 nova_compute[251104]: 2026-01-21 16:42:48.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:42:48 np0005590810 nova_compute[251104]: 2026-01-21 16:42:48.392 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:42:48 np0005590810 nova_compute[251104]: 2026-01-21 16:42:48.392 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:42:48 np0005590810 nova_compute[251104]: 2026-01-21 16:42:48.392 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:42:48 np0005590810 nova_compute[251104]: 2026-01-21 16:42:48.393 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:42:48 np0005590810 nova_compute[251104]: 2026-01-21 16:42:48.393 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.490005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013768490052, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2334, "num_deletes": 253, "total_data_size": 4814791, "memory_usage": 4903992, "flush_reason": "Manual Compaction"}
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013768534877, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4653497, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29557, "largest_seqno": 31890, "table_properties": {"data_size": 4643028, "index_size": 6451, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 25201, "raw_average_key_size": 21, "raw_value_size": 4620606, "raw_average_value_size": 3979, "num_data_blocks": 275, "num_entries": 1161, "num_filter_entries": 1161, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769013574, "oldest_key_time": 1769013574, "file_creation_time": 1769013768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 44940 microseconds, and 13031 cpu microseconds.
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.534938) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4653497 bytes OK
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.534969) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.537619) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.537646) EVENT_LOG_v1 {"time_micros": 1769013768537640, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.537672) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4804740, prev total WAL file size 4804740, number of live WAL files 2.
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.539286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(4544KB)], [65(10MB)]
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013768539330, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 15635725, "oldest_snapshot_seqno": -1}
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6376 keys, 13459846 bytes, temperature: kUnknown
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013768625332, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 13459846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13417598, "index_size": 25178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16005, "raw_key_size": 163858, "raw_average_key_size": 25, "raw_value_size": 13303244, "raw_average_value_size": 2086, "num_data_blocks": 1010, "num_entries": 6376, "num_filter_entries": 6376, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769011368, "oldest_key_time": 0, "file_creation_time": 1769013768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0497f0ab-ed87-45ae-8fdc-cddb57c7bd9d", "db_session_id": "6KF744HPATS83NMB4LEU", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.625666) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 13459846 bytes
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.627266) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.6 rd, 156.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.4, 10.5 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 6920, records dropped: 544 output_compression: NoCompression
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.627289) EVENT_LOG_v1 {"time_micros": 1769013768627279, "job": 36, "event": "compaction_finished", "compaction_time_micros": 86110, "compaction_time_cpu_micros": 38879, "output_level": 6, "num_output_files": 1, "total_output_size": 13459846, "num_input_records": 6920, "num_output_records": 6376, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013768628683, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769013768631351, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.539157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.631404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.631409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.631411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.631413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: rocksdb: (Original Log Time 2026/01/21-16:42:48.631415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 11:42:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:48.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:48.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:48.847Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:42:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:48.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:42:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3734772595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:42:48 np0005590810 nova_compute[251104]: 2026-01-21 16:42:48.936 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.014 251108 DEBUG nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.015 251108 DEBUG nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.174 251108 WARNING nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.176 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4385MB free_disk=59.92182540893555GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.176 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.176 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.275 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Instance 7e84b1a2-5047-4d10-a2f2-f18fb832420f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.276 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.276 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.313 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:42:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 65 op/s
Jan 21 11:42:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:42:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/75016783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.824 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.831 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.856 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.889 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 11:42:49 np0005590810 nova_compute[251104]: 2026-01-21 16:42:49.889 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:42:50 np0005590810 nova_compute[251104]: 2026-01-21 16:42:50.386 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:50 np0005590810 nova_compute[251104]: 2026-01-21 16:42:50.717 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:50.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:50.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:50 np0005590810 nova_compute[251104]: 2026-01-21 16:42:50.890 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:42:50 np0005590810 nova_compute[251104]: 2026-01-21 16:42:50.891 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 11:42:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:42:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Jan 21 11:42:51 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:42:51 np0005590810 ceph-mon[74380]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7127 writes, 31K keys, 7125 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 7126 writes, 7124 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1572 writes, 6899 keys, 1572 commit groups, 1.0 writes per commit group, ingest: 12.06 MB, 0.02 MB/s#012Interval WAL: 1572 writes, 1572 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     95.7      0.49              0.14        18    0.027       0      0       0.0       0.0#012  L6      1/0   12.84 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    152.2    129.4      1.48              0.48        17    0.087     93K   9409       0.0       0.0#012 Sum      1/0   12.84 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    114.6    121.1      1.97              0.62        35    0.056     93K   9409       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.1    129.9    134.2      0.47              0.16         8    0.058     26K   2616       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    152.2    129.4      1.48              0.48        17    0.087     93K   9409       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     96.5      0.48              0.14        17    0.028       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.0      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.045, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.23 GB write, 0.10 MB/s write, 0.22 GB read, 0.09 MB/s read, 2.0 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e6f7731350#2 capacity: 304.00 MB usage: 21.55 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000215 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1200,20.82 MB,6.85006%) FilterBlock(36,267.92 KB,0.0860666%) IndexBlock(36,474.39 KB,0.152392%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 21 11:42:52 np0005590810 nova_compute[251104]: 2026-01-21 16:42:52.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:42:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:52.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:52.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 21 11:42:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:42:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:42:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:54.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:54.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:55 np0005590810 nova_compute[251104]: 2026-01-21 16:42:55.162 251108 INFO nova.compute.manager [None req-2d60d31e-df7a-40d5-ae32-62e0de5712ff 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Get console output#033[00m
Jan 21 11:42:55 np0005590810 nova_compute[251104]: 2026-01-21 16:42:55.170 260713 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 21 11:42:55 np0005590810 nova_compute[251104]: 2026-01-21 16:42:55.388 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:42:55] "GET /metrics HTTP/1.1" 200 48676 "" "Prometheus/2.51.0"
Jan 21 11:42:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:42:55] "GET /metrics HTTP/1.1" 200 48676 "" "Prometheus/2.51.0"
Jan 21 11:42:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 21 11:42:55 np0005590810 nova_compute[251104]: 2026-01-21 16:42:55.720 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:42:56 np0005590810 nova_compute[251104]: 2026-01-21 16:42:56.306 251108 DEBUG nova.compute.manager [req-30f12d0e-4cc5-44b4-86ba-c9099ade9e60 req-85f3c4db-daa4-4f28-b437-938314ad0959 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-changed-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:42:56 np0005590810 nova_compute[251104]: 2026-01-21 16:42:56.306 251108 DEBUG nova.compute.manager [req-30f12d0e-4cc5-44b4-86ba-c9099ade9e60 req-85f3c4db-daa4-4f28-b437-938314ad0959 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Refreshing instance network info cache due to event network-changed-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 21 11:42:56 np0005590810 nova_compute[251104]: 2026-01-21 16:42:56.307 251108 DEBUG oslo_concurrency.lockutils [req-30f12d0e-4cc5-44b4-86ba-c9099ade9e60 req-85f3c4db-daa4-4f28-b437-938314ad0959 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:42:56 np0005590810 nova_compute[251104]: 2026-01-21 16:42:56.307 251108 DEBUG oslo_concurrency.lockutils [req-30f12d0e-4cc5-44b4-86ba-c9099ade9e60 req-85f3c4db-daa4-4f28-b437-938314ad0959 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquired lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:42:56 np0005590810 nova_compute[251104]: 2026-01-21 16:42:56.307 251108 DEBUG nova.network.neutron [req-30f12d0e-4cc5-44b4-86ba-c9099ade9e60 req-85f3c4db-daa4-4f28-b437-938314ad0959 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Refreshing network info cache for port b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 21 11:42:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:42:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:42:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:56.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:42:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:42:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:56.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:42:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:57.215Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:42:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:57.215Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:42:57 np0005590810 nova_compute[251104]: 2026-01-21 16:42:57.345 251108 INFO nova.compute.manager [None req-a42d874a-9cee-4ddb-a2da-75db7c01100d 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Get console output#033[00m
Jan 21 11:42:57 np0005590810 nova_compute[251104]: 2026-01-21 16:42:57.352 260713 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 21 11:42:57 np0005590810 nova_compute[251104]: 2026-01-21 16:42:57.606 251108 DEBUG nova.network.neutron [req-30f12d0e-4cc5-44b4-86ba-c9099ade9e60 req-85f3c4db-daa4-4f28-b437-938314ad0959 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updated VIF entry in instance network info cache for port b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 21 11:42:57 np0005590810 nova_compute[251104]: 2026-01-21 16:42:57.607 251108 DEBUG nova.network.neutron [req-30f12d0e-4cc5-44b4-86ba-c9099ade9e60 req-85f3c4db-daa4-4f28-b437-938314ad0959 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updating instance_info_cache with network_info: [{"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:42:57 np0005590810 nova_compute[251104]: 2026-01-21 16:42:57.623 251108 DEBUG oslo_concurrency.lockutils [req-30f12d0e-4cc5-44b4-86ba-c9099ade9e60 req-85f3c4db-daa4-4f28-b437-938314ad0959 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Releasing lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:42:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 21 11:42:58 np0005590810 nova_compute[251104]: 2026-01-21 16:42:58.410 251108 DEBUG nova.compute.manager [req-4b4db314-6603-4c79-9233-0ba93f6ce02c req-77945a3c-a7e2-4d35-a5b2-e05a5a439d4a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-vif-unplugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:42:58 np0005590810 nova_compute[251104]: 2026-01-21 16:42:58.411 251108 DEBUG oslo_concurrency.lockutils [req-4b4db314-6603-4c79-9233-0ba93f6ce02c req-77945a3c-a7e2-4d35-a5b2-e05a5a439d4a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:42:58 np0005590810 nova_compute[251104]: 2026-01-21 16:42:58.411 251108 DEBUG oslo_concurrency.lockutils [req-4b4db314-6603-4c79-9233-0ba93f6ce02c req-77945a3c-a7e2-4d35-a5b2-e05a5a439d4a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:42:58 np0005590810 nova_compute[251104]: 2026-01-21 16:42:58.411 251108 DEBUG oslo_concurrency.lockutils [req-4b4db314-6603-4c79-9233-0ba93f6ce02c req-77945a3c-a7e2-4d35-a5b2-e05a5a439d4a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:42:58 np0005590810 nova_compute[251104]: 2026-01-21 16:42:58.411 251108 DEBUG nova.compute.manager [req-4b4db314-6603-4c79-9233-0ba93f6ce02c req-77945a3c-a7e2-4d35-a5b2-e05a5a439d4a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] No waiting events found dispatching network-vif-unplugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 21 11:42:58 np0005590810 nova_compute[251104]: 2026-01-21 16:42:58.412 251108 WARNING nova.compute.manager [req-4b4db314-6603-4c79-9233-0ba93f6ce02c req-77945a3c-a7e2-4d35-a5b2-e05a5a439d4a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received unexpected event network-vif-unplugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 for instance with vm_state active and task_state None.#033[00m
Jan 21 11:42:58 np0005590810 nova_compute[251104]: 2026-01-21 16:42:58.412 251108 DEBUG nova.compute.manager [req-4b4db314-6603-4c79-9233-0ba93f6ce02c req-77945a3c-a7e2-4d35-a5b2-e05a5a439d4a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:42:58 np0005590810 nova_compute[251104]: 2026-01-21 16:42:58.412 251108 DEBUG oslo_concurrency.lockutils [req-4b4db314-6603-4c79-9233-0ba93f6ce02c req-77945a3c-a7e2-4d35-a5b2-e05a5a439d4a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:42:58 np0005590810 nova_compute[251104]: 2026-01-21 16:42:58.412 251108 DEBUG oslo_concurrency.lockutils [req-4b4db314-6603-4c79-9233-0ba93f6ce02c req-77945a3c-a7e2-4d35-a5b2-e05a5a439d4a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:42:58 np0005590810 nova_compute[251104]: 2026-01-21 16:42:58.412 251108 DEBUG oslo_concurrency.lockutils [req-4b4db314-6603-4c79-9233-0ba93f6ce02c req-77945a3c-a7e2-4d35-a5b2-e05a5a439d4a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:42:58 np0005590810 nova_compute[251104]: 2026-01-21 16:42:58.412 251108 DEBUG nova.compute.manager [req-4b4db314-6603-4c79-9233-0ba93f6ce02c req-77945a3c-a7e2-4d35-a5b2-e05a5a439d4a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] No waiting events found dispatching network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 21 11:42:58 np0005590810 nova_compute[251104]: 2026-01-21 16:42:58.413 251108 WARNING nova.compute.manager [req-4b4db314-6603-4c79-9233-0ba93f6ce02c req-77945a3c-a7e2-4d35-a5b2-e05a5a439d4a 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received unexpected event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 for instance with vm_state active and task_state None.#033[00m
Jan 21 11:42:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:42:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:42:58.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:42:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:42:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:42:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:42:58.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:42:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:58.848Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:42:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:42:58.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:42:59 np0005590810 nova_compute[251104]: 2026-01-21 16:42:59.582 251108 DEBUG nova.compute.manager [req-a09c0e51-7509-44cc-84cb-660a6165909c req-2dd6b87d-8967-4c61-8e5a-bd3f719e2630 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-changed-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:42:59 np0005590810 nova_compute[251104]: 2026-01-21 16:42:59.583 251108 DEBUG nova.compute.manager [req-a09c0e51-7509-44cc-84cb-660a6165909c req-2dd6b87d-8967-4c61-8e5a-bd3f719e2630 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Refreshing instance network info cache due to event network-changed-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 21 11:42:59 np0005590810 nova_compute[251104]: 2026-01-21 16:42:59.583 251108 DEBUG oslo_concurrency.lockutils [req-a09c0e51-7509-44cc-84cb-660a6165909c req-2dd6b87d-8967-4c61-8e5a-bd3f719e2630 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:42:59 np0005590810 nova_compute[251104]: 2026-01-21 16:42:59.583 251108 DEBUG oslo_concurrency.lockutils [req-a09c0e51-7509-44cc-84cb-660a6165909c req-2dd6b87d-8967-4c61-8e5a-bd3f719e2630 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquired lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:42:59 np0005590810 nova_compute[251104]: 2026-01-21 16:42:59.583 251108 DEBUG nova.network.neutron [req-a09c0e51-7509-44cc-84cb-660a6165909c req-2dd6b87d-8967-4c61-8e5a-bd3f719e2630 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Refreshing network info cache for port b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 21 11:42:59 np0005590810 nova_compute[251104]: 2026-01-21 16:42:59.691 251108 INFO nova.compute.manager [None req-be7b9c27-13f3-499a-b73d-febadf2755dd 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Get console output#033[00m
Jan 21 11:42:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 21 11:42:59 np0005590810 nova_compute[251104]: 2026-01-21 16:42:59.696 260713 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 21 11:43:00 np0005590810 nova_compute[251104]: 2026-01-21 16:43:00.391 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:00 np0005590810 nova_compute[251104]: 2026-01-21 16:43:00.722 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:00.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:00.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:43:01 np0005590810 nova_compute[251104]: 2026-01-21 16:43:01.544 251108 DEBUG nova.compute.manager [req-5b583322-c439-4251-a614-67d6cdca7d6e req-8d53bfa4-40ee-423d-a7ea-89f96b93bc7b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:43:01 np0005590810 nova_compute[251104]: 2026-01-21 16:43:01.544 251108 DEBUG oslo_concurrency.lockutils [req-5b583322-c439-4251-a614-67d6cdca7d6e req-8d53bfa4-40ee-423d-a7ea-89f96b93bc7b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:43:01 np0005590810 nova_compute[251104]: 2026-01-21 16:43:01.545 251108 DEBUG oslo_concurrency.lockutils [req-5b583322-c439-4251-a614-67d6cdca7d6e req-8d53bfa4-40ee-423d-a7ea-89f96b93bc7b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:43:01 np0005590810 nova_compute[251104]: 2026-01-21 16:43:01.545 251108 DEBUG oslo_concurrency.lockutils [req-5b583322-c439-4251-a614-67d6cdca7d6e req-8d53bfa4-40ee-423d-a7ea-89f96b93bc7b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:43:01 np0005590810 nova_compute[251104]: 2026-01-21 16:43:01.545 251108 DEBUG nova.compute.manager [req-5b583322-c439-4251-a614-67d6cdca7d6e req-8d53bfa4-40ee-423d-a7ea-89f96b93bc7b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] No waiting events found dispatching network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 21 11:43:01 np0005590810 nova_compute[251104]: 2026-01-21 16:43:01.545 251108 WARNING nova.compute.manager [req-5b583322-c439-4251-a614-67d6cdca7d6e req-8d53bfa4-40ee-423d-a7ea-89f96b93bc7b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received unexpected event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 for instance with vm_state active and task_state None.#033[00m
Jan 21 11:43:01 np0005590810 nova_compute[251104]: 2026-01-21 16:43:01.546 251108 DEBUG nova.compute.manager [req-5b583322-c439-4251-a614-67d6cdca7d6e req-8d53bfa4-40ee-423d-a7ea-89f96b93bc7b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:43:01 np0005590810 nova_compute[251104]: 2026-01-21 16:43:01.546 251108 DEBUG oslo_concurrency.lockutils [req-5b583322-c439-4251-a614-67d6cdca7d6e req-8d53bfa4-40ee-423d-a7ea-89f96b93bc7b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:43:01 np0005590810 nova_compute[251104]: 2026-01-21 16:43:01.546 251108 DEBUG oslo_concurrency.lockutils [req-5b583322-c439-4251-a614-67d6cdca7d6e req-8d53bfa4-40ee-423d-a7ea-89f96b93bc7b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:43:01 np0005590810 nova_compute[251104]: 2026-01-21 16:43:01.546 251108 DEBUG oslo_concurrency.lockutils [req-5b583322-c439-4251-a614-67d6cdca7d6e req-8d53bfa4-40ee-423d-a7ea-89f96b93bc7b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:43:01 np0005590810 nova_compute[251104]: 2026-01-21 16:43:01.546 251108 DEBUG nova.compute.manager [req-5b583322-c439-4251-a614-67d6cdca7d6e req-8d53bfa4-40ee-423d-a7ea-89f96b93bc7b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] No waiting events found dispatching network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 21 11:43:01 np0005590810 nova_compute[251104]: 2026-01-21 16:43:01.547 251108 WARNING nova.compute.manager [req-5b583322-c439-4251-a614-67d6cdca7d6e req-8d53bfa4-40ee-423d-a7ea-89f96b93bc7b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received unexpected event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 for instance with vm_state active and task_state None.#033[00m
Jan 21 11:43:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 21 11:43:02 np0005590810 nova_compute[251104]: 2026-01-21 16:43:02.003 251108 DEBUG nova.network.neutron [req-a09c0e51-7509-44cc-84cb-660a6165909c req-2dd6b87d-8967-4c61-8e5a-bd3f719e2630 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updated VIF entry in instance network info cache for port b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 21 11:43:02 np0005590810 nova_compute[251104]: 2026-01-21 16:43:02.003 251108 DEBUG nova.network.neutron [req-a09c0e51-7509-44cc-84cb-660a6165909c req-2dd6b87d-8967-4c61-8e5a-bd3f719e2630 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updating instance_info_cache with network_info: [{"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:43:02 np0005590810 nova_compute[251104]: 2026-01-21 16:43:02.026 251108 DEBUG oslo_concurrency.lockutils [req-a09c0e51-7509-44cc-84cb-660a6165909c req-2dd6b87d-8967-4c61-8e5a-bd3f719e2630 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Releasing lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:43:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:02.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:02.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 15 KiB/s wr, 2 op/s
Jan 21 11:43:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:04.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:04.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:05 np0005590810 nova_compute[251104]: 2026-01-21 16:43:05.393 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:43:05] "GET /metrics HTTP/1.1" 200 48676 "" "Prometheus/2.51.0"
Jan 21 11:43:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:43:05] "GET /metrics HTTP/1.1" 200 48676 "" "Prometheus/2.51.0"
Jan 21 11:43:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 20 KiB/s wr, 31 op/s
Jan 21 11:43:05 np0005590810 nova_compute[251104]: 2026-01-21 16:43:05.724 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.600 251108 DEBUG nova.compute.manager [req-606982a3-795e-4b8b-ad88-553aed5bf3f1 req-b981b108-59eb-4e06-9373-c3ba41a89c3f 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-changed-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.600 251108 DEBUG nova.compute.manager [req-606982a3-795e-4b8b-ad88-553aed5bf3f1 req-b981b108-59eb-4e06-9373-c3ba41a89c3f 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Refreshing instance network info cache due to event network-changed-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.601 251108 DEBUG oslo_concurrency.lockutils [req-606982a3-795e-4b8b-ad88-553aed5bf3f1 req-b981b108-59eb-4e06-9373-c3ba41a89c3f 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.601 251108 DEBUG oslo_concurrency.lockutils [req-606982a3-795e-4b8b-ad88-553aed5bf3f1 req-b981b108-59eb-4e06-9373-c3ba41a89c3f 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquired lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.601 251108 DEBUG nova.network.neutron [req-606982a3-795e-4b8b-ad88-553aed5bf3f1 req-b981b108-59eb-4e06-9373-c3ba41a89c3f 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Refreshing network info cache for port b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.693 251108 DEBUG oslo_concurrency.lockutils [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.695 251108 DEBUG oslo_concurrency.lockutils [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.695 251108 DEBUG oslo_concurrency.lockutils [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.695 251108 DEBUG oslo_concurrency.lockutils [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.695 251108 DEBUG oslo_concurrency.lockutils [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.697 251108 INFO nova.compute.manager [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Terminating instance#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.698 251108 DEBUG nova.compute.manager [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 21 11:43:06 np0005590810 kernel: tapb3ff0b81-0a (unregistering): left promiscuous mode
Jan 21 11:43:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:06.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:06 np0005590810 NetworkManager[48894]: <info>  [1769013786.7622] device (tapb3ff0b81-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 21 11:43:06 np0005590810 ovn_controller[152632]: 2026-01-21T16:43:06Z|00084|binding|INFO|Releasing lport b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 from this chassis (sb_readonly=0)
Jan 21 11:43:06 np0005590810 ovn_controller[152632]: 2026-01-21T16:43:06Z|00085|binding|INFO|Setting lport b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 down in Southbound
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.771 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:06 np0005590810 ovn_controller[152632]: 2026-01-21T16:43:06Z|00086|binding|INFO|Removing iface tapb3ff0b81-0a ovn-installed in OVS
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.774 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:06 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:06.786 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:13:b6 10.100.0.5'], port_security=['fa:16:3e:57:13:b6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7e84b1a2-5047-4d10-a2f2-f18fb832420f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02c85004-4705-4aed-8c2b-9592f54dd920', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d6214185b004f9c9798abfc29d1ae14', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c2533930-eed5-4ea0-a1b7-bfc1d86b4b1f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53c5efc4-3c1a-4340-9a3f-2f6f0aff9289, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>], logical_port=b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f61aaf86640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:43:06 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:06.788 163593 INFO neutron.agent.ovn.metadata.agent [-] Port b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 in datapath 02c85004-4705-4aed-8c2b-9592f54dd920 unbound from our chassis#033[00m
Jan 21 11:43:06 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:06.789 163593 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02c85004-4705-4aed-8c2b-9592f54dd920, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 21 11:43:06 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:06.790 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[c0437b94-99b8-4b1d-832c-1eb6644afab8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:43:06 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:06.791 163593 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920 namespace which is not needed anymore#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.792 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:06 np0005590810 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 21 11:43:06 np0005590810 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Consumed 15.450s CPU time.
Jan 21 11:43:06 np0005590810 systemd-machined[217254]: Machine qemu-4-instance-0000000b terminated.
Jan 21 11:43:06 np0005590810 podman[273014]: 2026-01-21 16:43:06.887349744 +0000 UTC m=+0.090652402 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 21 11:43:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:06.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.923 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.929 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:06 np0005590810 neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920[272090]: [NOTICE]   (272094) : haproxy version is 2.8.14-c23fe91
Jan 21 11:43:06 np0005590810 neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920[272090]: [NOTICE]   (272094) : path to executable is /usr/sbin/haproxy
Jan 21 11:43:06 np0005590810 neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920[272090]: [WARNING]  (272094) : Exiting Master process...
Jan 21 11:43:06 np0005590810 neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920[272090]: [WARNING]  (272094) : Exiting Master process...
Jan 21 11:43:06 np0005590810 neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920[272090]: [ALERT]    (272094) : Current worker (272096) exited with code 143 (Terminated)
Jan 21 11:43:06 np0005590810 neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920[272090]: [WARNING]  (272094) : All workers exited. Exiting... (0)
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.941 251108 INFO nova.virt.libvirt.driver [-] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Instance destroyed successfully.#033[00m
Jan 21 11:43:06 np0005590810 systemd[1]: libpod-0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff.scope: Deactivated successfully.
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.942 251108 DEBUG nova.objects.instance [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lazy-loading 'resources' on Instance uuid 7e84b1a2-5047-4d10-a2f2-f18fb832420f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 21 11:43:06 np0005590810 podman[273057]: 2026-01-21 16:43:06.946457977 +0000 UTC m=+0.061354013 container died 0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.959 251108 DEBUG nova.virt.libvirt.vif [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-21T16:42:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-169262401',display_name='tempest-TestNetworkBasicOps-server-169262401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-169262401',id=11,image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPqo8ijgDC/VmjoMkIS4OXl3nZQslio/6ZpG6oLieA37YDqhmdueG99K42pXQUKYcd0SudRZ7X6453WpvXEnc80w5WhZFjagGA5Xif2xoVOlTnllyftwuZ5Cg/7ZgrbdWg==',key_name='tempest-TestNetworkBasicOps-1889562769',keypairs=<?>,launch_index=0,launched_at=2026-01-21T16:42:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d6214185b004f9c9798abfc29d1ae14',ramdisk_id='',reservation_id='r-2wbk007n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='437c3ca6-5ffb-4fe3-bf44-c018ba6b23b1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1793517209',owner_user_name='tempest-TestNetworkBasicOps-1793517209-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-21T16:42:15Z,user_data=None,user_id='918cf3fb78394ce8b3ade91a1ad699fc',uuid=7e84b1a2-5047-4d10-a2f2-f18fb832420f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.961 251108 DEBUG nova.network.os_vif_util [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converting VIF {"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.962 251108 DEBUG nova.network.os_vif_util [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:13:b6,bridge_name='br-int',has_traffic_filtering=True,id=b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7,network=Network(02c85004-4705-4aed-8c2b-9592f54dd920),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff0b81-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.962 251108 DEBUG os_vif [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:13:b6,bridge_name='br-int',has_traffic_filtering=True,id=b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7,network=Network(02c85004-4705-4aed-8c2b-9592f54dd920),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff0b81-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.964 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.964 251108 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3ff0b81-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.967 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:06 np0005590810 nova_compute[251104]: 2026-01-21 16:43:06.970 251108 INFO os_vif [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:13:b6,bridge_name='br-int',has_traffic_filtering=True,id=b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7,network=Network(02c85004-4705-4aed-8c2b-9592f54dd920),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3ff0b81-0a')#033[00m
Jan 21 11:43:06 np0005590810 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff-userdata-shm.mount: Deactivated successfully.
Jan 21 11:43:06 np0005590810 systemd[1]: var-lib-containers-storage-overlay-83b0c78822b27ce57ab420da7e97560d332883cfeded8c9724e998ad2939e3ae-merged.mount: Deactivated successfully.
Jan 21 11:43:07 np0005590810 podman[273057]: 2026-01-21 16:43:06.999981557 +0000 UTC m=+0.114877603 container cleanup 0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 11:43:07 np0005590810 systemd[1]: libpod-conmon-0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff.scope: Deactivated successfully.
Jan 21 11:43:07 np0005590810 podman[273117]: 2026-01-21 16:43:07.088709568 +0000 UTC m=+0.056715240 container remove 0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 21 11:43:07 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:07.096 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[35be1bff-7bdc-4555-87e6-9377ec601eb0]: (4, ('Wed Jan 21 04:43:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920 (0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff)\n0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff\nWed Jan 21 04:43:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920 (0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff)\n0231f5a90681e9dbb699ae820b53c81d3f87ffbb4f96aeaab951dc196add49ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:43:07 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:07.098 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[1050eea5-8c87-406f-806c-23ecc39291a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:43:07 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:07.099 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02c85004-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:43:07 np0005590810 nova_compute[251104]: 2026-01-21 16:43:07.101 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:07 np0005590810 kernel: tap02c85004-40: left promiscuous mode
Jan 21 11:43:07 np0005590810 nova_compute[251104]: 2026-01-21 16:43:07.118 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:07 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:07.121 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[1453b3b8-21a3-4cdb-8004-7ca8f9ab2577]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:43:07 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:07.141 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[aed9834f-4225-45b8-9dd9-7d7ef35c02d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:43:07 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:07.143 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[4ede2ef5-ba32-4e77-9a4e-2a1ccf906c6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:43:07 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:07.159 260432 DEBUG oslo.privsep.daemon [-] privsep: reply[7503ca8d-5f7c-4dbb-8dce-370ce56c0caa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495070, 'reachable_time': 21892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273131, 'error': None, 'target': 'ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:43:07 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:07.162 163844 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02c85004-4705-4aed-8c2b-9592f54dd920 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 21 11:43:07 np0005590810 systemd[1]: run-netns-ovnmeta\x2d02c85004\x2d4705\x2d4aed\x2d8c2b\x2d9592f54dd920.mount: Deactivated successfully.
Jan 21 11:43:07 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:07.162 163844 DEBUG oslo.privsep.daemon [-] privsep: reply[a48f501c-bb16-49e1-93c2-4333def780a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 21 11:43:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:07.216Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:43:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:07.216Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:43:07 np0005590810 nova_compute[251104]: 2026-01-21 16:43:07.612 251108 INFO nova.virt.libvirt.driver [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Deleting instance files /var/lib/nova/instances/7e84b1a2-5047-4d10-a2f2-f18fb832420f_del#033[00m
Jan 21 11:43:07 np0005590810 nova_compute[251104]: 2026-01-21 16:43:07.614 251108 INFO nova.virt.libvirt.driver [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Deletion of /var/lib/nova/instances/7e84b1a2-5047-4d10-a2f2-f18fb832420f_del complete#033[00m
Jan 21 11:43:07 np0005590810 nova_compute[251104]: 2026-01-21 16:43:07.677 251108 INFO nova.compute.manager [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Jan 21 11:43:07 np0005590810 nova_compute[251104]: 2026-01-21 16:43:07.678 251108 DEBUG oslo.service.loopingcall [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 21 11:43:07 np0005590810 nova_compute[251104]: 2026-01-21 16:43:07.678 251108 DEBUG nova.compute.manager [-] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 21 11:43:07 np0005590810 nova_compute[251104]: 2026-01-21 16:43:07.678 251108 DEBUG nova.network.neutron [-] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 21 11:43:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.8 KiB/s wr, 30 op/s
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.510 251108 DEBUG nova.network.neutron [req-606982a3-795e-4b8b-ad88-553aed5bf3f1 req-b981b108-59eb-4e06-9373-c3ba41a89c3f 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updated VIF entry in instance network info cache for port b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.510 251108 DEBUG nova.network.neutron [req-606982a3-795e-4b8b-ad88-553aed5bf3f1 req-b981b108-59eb-4e06-9373-c3ba41a89c3f 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updating instance_info_cache with network_info: [{"id": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "address": "fa:16:3e:57:13:b6", "network": {"id": "02c85004-4705-4aed-8c2b-9592f54dd920", "bridge": "br-int", "label": "tempest-network-smoke--638918670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d6214185b004f9c9798abfc29d1ae14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3ff0b81-0a", "ovs_interfaceid": "b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.536 251108 DEBUG oslo_concurrency.lockutils [req-606982a3-795e-4b8b-ad88-553aed5bf3f1 req-b981b108-59eb-4e06-9373-c3ba41a89c3f 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Releasing lock "refresh_cache-7e84b1a2-5047-4d10-a2f2-f18fb832420f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.537 251108 DEBUG nova.network.neutron [-] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.757 251108 DEBUG nova.compute.manager [req-9bc04e23-8e4c-45f3-9ffb-7e37c8a20428 req-7ac39581-76a1-4cb1-a761-a9bbbdf62b4e 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-vif-deleted-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.758 251108 INFO nova.compute.manager [req-9bc04e23-8e4c-45f3-9ffb-7e37c8a20428 req-7ac39581-76a1-4cb1-a761-a9bbbdf62b4e 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Neutron deleted interface b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7; detaching it from the instance and deleting it from the info cache#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.758 251108 DEBUG nova.network.neutron [req-9bc04e23-8e4c-45f3-9ffb-7e37c8a20428 req-7ac39581-76a1-4cb1-a761-a9bbbdf62b4e 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 21 11:43:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:08.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.761 251108 INFO nova.compute.manager [-] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Took 1.08 seconds to deallocate network for instance.#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.769 251108 DEBUG nova.compute.manager [req-1fc51fc3-36c1-4f4f-91e4-7d7a03d423c9 req-82d3c0a0-15b6-4ecb-8a3f-b1c60255752b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-vif-unplugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.769 251108 DEBUG oslo_concurrency.lockutils [req-1fc51fc3-36c1-4f4f-91e4-7d7a03d423c9 req-82d3c0a0-15b6-4ecb-8a3f-b1c60255752b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.770 251108 DEBUG oslo_concurrency.lockutils [req-1fc51fc3-36c1-4f4f-91e4-7d7a03d423c9 req-82d3c0a0-15b6-4ecb-8a3f-b1c60255752b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.770 251108 DEBUG oslo_concurrency.lockutils [req-1fc51fc3-36c1-4f4f-91e4-7d7a03d423c9 req-82d3c0a0-15b6-4ecb-8a3f-b1c60255752b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.770 251108 DEBUG nova.compute.manager [req-1fc51fc3-36c1-4f4f-91e4-7d7a03d423c9 req-82d3c0a0-15b6-4ecb-8a3f-b1c60255752b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] No waiting events found dispatching network-vif-unplugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.770 251108 DEBUG nova.compute.manager [req-1fc51fc3-36c1-4f4f-91e4-7d7a03d423c9 req-82d3c0a0-15b6-4ecb-8a3f-b1c60255752b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-vif-unplugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.771 251108 DEBUG nova.compute.manager [req-1fc51fc3-36c1-4f4f-91e4-7d7a03d423c9 req-82d3c0a0-15b6-4ecb-8a3f-b1c60255752b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.771 251108 DEBUG oslo_concurrency.lockutils [req-1fc51fc3-36c1-4f4f-91e4-7d7a03d423c9 req-82d3c0a0-15b6-4ecb-8a3f-b1c60255752b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Acquiring lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.771 251108 DEBUG oslo_concurrency.lockutils [req-1fc51fc3-36c1-4f4f-91e4-7d7a03d423c9 req-82d3c0a0-15b6-4ecb-8a3f-b1c60255752b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.771 251108 DEBUG oslo_concurrency.lockutils [req-1fc51fc3-36c1-4f4f-91e4-7d7a03d423c9 req-82d3c0a0-15b6-4ecb-8a3f-b1c60255752b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.771 251108 DEBUG nova.compute.manager [req-1fc51fc3-36c1-4f4f-91e4-7d7a03d423c9 req-82d3c0a0-15b6-4ecb-8a3f-b1c60255752b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] No waiting events found dispatching network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.772 251108 WARNING nova.compute.manager [req-1fc51fc3-36c1-4f4f-91e4-7d7a03d423c9 req-82d3c0a0-15b6-4ecb-8a3f-b1c60255752b 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Received unexpected event network-vif-plugged-b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7 for instance with vm_state active and task_state deleting.#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.800 251108 DEBUG nova.compute.manager [req-9bc04e23-8e4c-45f3-9ffb-7e37c8a20428 req-7ac39581-76a1-4cb1-a761-a9bbbdf62b4e 4888d32151e242ca91ef5065dd76ca7c 52d50a68be524b499cf44dc442e24944 - - default default] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Detach interface failed, port_id=b3ff0b81-0a52-4670-a4f1-04c4aa73b8f7, reason: Instance 7e84b1a2-5047-4d10-a2f2-f18fb832420f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.820 251108 DEBUG oslo_concurrency.lockutils [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.820 251108 DEBUG oslo_concurrency.lockutils [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:43:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:08.849Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:43:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:08.849Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:43:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:08.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:43:08 np0005590810 nova_compute[251104]: 2026-01-21 16:43:08.876 251108 DEBUG oslo_concurrency.processutils [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:43:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:08.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:43:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:43:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:43:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2060057310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:43:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:43:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:43:09 np0005590810 nova_compute[251104]: 2026-01-21 16:43:09.370 251108 DEBUG oslo_concurrency.processutils [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:43:09 np0005590810 nova_compute[251104]: 2026-01-21 16:43:09.377 251108 DEBUG nova.compute.provider_tree [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:43:09 np0005590810 nova_compute[251104]: 2026-01-21 16:43:09.397 251108 DEBUG nova.scheduler.client.report [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:43:09 np0005590810 nova_compute[251104]: 2026-01-21 16:43:09.420 251108 DEBUG oslo_concurrency.lockutils [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:43:09 np0005590810 nova_compute[251104]: 2026-01-21 16:43:09.442 251108 INFO nova.scheduler.client.report [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Deleted allocations for instance 7e84b1a2-5047-4d10-a2f2-f18fb832420f#033[00m
Jan 21 11:43:09 np0005590810 nova_compute[251104]: 2026-01-21 16:43:09.522 251108 DEBUG oslo_concurrency.lockutils [None req-77f2d379-270f-4aba-8953-e153434559d6 918cf3fb78394ce8b3ade91a1ad699fc 3d6214185b004f9c9798abfc29d1ae14 - - default default] Lock "7e84b1a2-5047-4d10-a2f2-f18fb832420f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:43:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:43:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:43:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:43:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:43:09 np0005590810 podman[273158]: 2026-01-21 16:43:09.695849165 +0000 UTC m=+0.078228746 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:43:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.8 KiB/s wr, 30 op/s
Jan 21 11:43:10 np0005590810 nova_compute[251104]: 2026-01-21 16:43:10.396 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:10.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:10.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:43:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 9.0 KiB/s wr, 58 op/s
Jan 21 11:43:11 np0005590810 nova_compute[251104]: 2026-01-21 16:43:11.967 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:12.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:12.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Jan 21 11:43:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:14.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:14.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:15 np0005590810 nova_compute[251104]: 2026-01-21 16:43:15.399 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:43:15] "GET /metrics HTTP/1.1" 200 48679 "" "Prometheus/2.51.0"
Jan 21 11:43:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:43:15] "GET /metrics HTTP/1.1" 200 48679 "" "Prometheus/2.51.0"
Jan 21 11:43:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Jan 21 11:43:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:43:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:16.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:16.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:16 np0005590810 nova_compute[251104]: 2026-01-21 16:43:16.972 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:17.217Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:43:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 21 11:43:18 np0005590810 nova_compute[251104]: 2026-01-21 16:43:18.072 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:18 np0005590810 nova_compute[251104]: 2026-01-21 16:43:18.156 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:18.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:18.850Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:43:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:18.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:43:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:18.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 21 11:43:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 11:43:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4275343737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 11:43:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 11:43:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4275343737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 11:43:20 np0005590810 nova_compute[251104]: 2026-01-21 16:43:20.402 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:20.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:20.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:43:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 21 11:43:21 np0005590810 nova_compute[251104]: 2026-01-21 16:43:21.939 251108 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769013786.9378636, 7e84b1a2-5047-4d10-a2f2-f18fb832420f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 21 11:43:21 np0005590810 nova_compute[251104]: 2026-01-21 16:43:21.939 251108 INFO nova.compute.manager [-] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] VM Stopped (Lifecycle Event)#033[00m
Jan 21 11:43:21 np0005590810 nova_compute[251104]: 2026-01-21 16:43:21.959 251108 DEBUG nova.compute.manager [None req-c87c21f0-b045-4544-b677-96fcead3f412 - - - - - -] [instance: 7e84b1a2-5047-4d10-a2f2-f18fb832420f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 21 11:43:21 np0005590810 nova_compute[251104]: 2026-01-21 16:43:21.975 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:22.035 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:43:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:22.036 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:43:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:22.036 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:43:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:22.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:22.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:43:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:43:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:43:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:24.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:24.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:25 np0005590810 nova_compute[251104]: 2026-01-21 16:43:25.404 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:43:25] "GET /metrics HTTP/1.1" 200 48657 "" "Prometheus/2.51.0"
Jan 21 11:43:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:43:25] "GET /metrics HTTP/1.1" 200 48657 "" "Prometheus/2.51.0"
Jan 21 11:43:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:43:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:43:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:26.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:26.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:26 np0005590810 nova_compute[251104]: 2026-01-21 16:43:26.978 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:27.218Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:43:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:27.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:43:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:43:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:28.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:28.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:43:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:28.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:43:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:28.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:43:30 np0005590810 nova_compute[251104]: 2026-01-21 16:43:30.406 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:30.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:30.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:43:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:43:31 np0005590810 nova_compute[251104]: 2026-01-21 16:43:31.982 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:32.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:32.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:43:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:34.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:34.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:35 np0005590810 nova_compute[251104]: 2026-01-21 16:43:35.407 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:43:35] "GET /metrics HTTP/1.1" 200 48657 "" "Prometheus/2.51.0"
Jan 21 11:43:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:43:35] "GET /metrics HTTP/1.1" 200 48657 "" "Prometheus/2.51.0"
Jan 21 11:43:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 88 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 21 11:43:36 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-crash-compute-0[79851]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 21 11:43:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:43:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:36.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:36.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:36 np0005590810 nova_compute[251104]: 2026-01-21 16:43:36.984 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:37.219Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:43:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:37.220Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:43:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 88 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 21 11:43:37 np0005590810 podman[273239]: 2026-01-21 16:43:37.718495565 +0000 UTC m=+0.092036605 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 11:43:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:38.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:38.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:43:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:38.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:43:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:38.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:43:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:38.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:43:39
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', '.mgr', 'images', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.log', 'backups', '.nfs']
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:43:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:43:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:43:39 np0005590810 nova_compute[251104]: 2026-01-21 16:43:39.362 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 88 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:43:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:43:40 np0005590810 nova_compute[251104]: 2026-01-21 16:43:40.409 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:40 np0005590810 podman[273261]: 2026-01-21 16:43:40.722086484 +0000 UTC m=+0.099305069 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 21 11:43:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:40.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:40.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:43:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1095: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:43:41 np0005590810 nova_compute[251104]: 2026-01-21 16:43:41.988 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:42.002 163593 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:19:7b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:3b:98:31:96:2a'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 11:43:42 np0005590810 nova_compute[251104]: 2026-01-21 16:43:42.002 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:42 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:42.003 163593 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 11:43:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:42.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:42.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:43 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:43:43.006 163593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f6e8413f-2ba2-49cb-8bd6-36b8085ce01c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 11:43:43 np0005590810 nova_compute[251104]: 2026-01-21 16:43:43.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:43:43 np0005590810 nova_compute[251104]: 2026-01-21 16:43:43.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:43:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 21 11:43:44 np0005590810 nova_compute[251104]: 2026-01-21 16:43:44.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:43:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:43:44 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:43:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:43:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:44.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:43:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:44.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:45 np0005590810 podman[273465]: 2026-01-21 16:43:45.04387414 +0000 UTC m=+0.055047197 container create 96106a264ae4c00e1570eeb562f149cb07c04dd8f4b4530bbe9ed432ec441aa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:43:45 np0005590810 systemd[1]: Started libpod-conmon-96106a264ae4c00e1570eeb562f149cb07c04dd8f4b4530bbe9ed432ec441aa0.scope.
Jan 21 11:43:45 np0005590810 podman[273465]: 2026-01-21 16:43:45.022079665 +0000 UTC m=+0.033252742 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:43:45 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:43:45 np0005590810 podman[273465]: 2026-01-21 16:43:45.146188244 +0000 UTC m=+0.157361321 container init 96106a264ae4c00e1570eeb562f149cb07c04dd8f4b4530bbe9ed432ec441aa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_villani, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 11:43:45 np0005590810 podman[273465]: 2026-01-21 16:43:45.156284006 +0000 UTC m=+0.167457063 container start 96106a264ae4c00e1570eeb562f149cb07c04dd8f4b4530bbe9ed432ec441aa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_villani, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 11:43:45 np0005590810 podman[273465]: 2026-01-21 16:43:45.159811916 +0000 UTC m=+0.170984973 container attach 96106a264ae4c00e1570eeb562f149cb07c04dd8f4b4530bbe9ed432ec441aa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_villani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:43:45 np0005590810 priceless_villani[273481]: 167 167
Jan 21 11:43:45 np0005590810 systemd[1]: libpod-96106a264ae4c00e1570eeb562f149cb07c04dd8f4b4530bbe9ed432ec441aa0.scope: Deactivated successfully.
Jan 21 11:43:45 np0005590810 conmon[273481]: conmon 96106a264ae4c00e1570 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96106a264ae4c00e1570eeb562f149cb07c04dd8f4b4530bbe9ed432ec441aa0.scope/container/memory.events
Jan 21 11:43:45 np0005590810 podman[273465]: 2026-01-21 16:43:45.165361008 +0000 UTC m=+0.176534065 container died 96106a264ae4c00e1570eeb562f149cb07c04dd8f4b4530bbe9ed432ec441aa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_villani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 21 11:43:45 np0005590810 systemd[1]: var-lib-containers-storage-overlay-d75869001ffcd97bdd1f0524cee2728cf6cf91405dee14a80a62662053ee73a6-merged.mount: Deactivated successfully.
Jan 21 11:43:45 np0005590810 podman[273465]: 2026-01-21 16:43:45.230764427 +0000 UTC m=+0.241937484 container remove 96106a264ae4c00e1570eeb562f149cb07c04dd8f4b4530bbe9ed432ec441aa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:43:45 np0005590810 systemd[1]: libpod-conmon-96106a264ae4c00e1570eeb562f149cb07c04dd8f4b4530bbe9ed432ec441aa0.scope: Deactivated successfully.
Jan 21 11:43:45 np0005590810 nova_compute[251104]: 2026-01-21 16:43:45.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:43:45 np0005590810 nova_compute[251104]: 2026-01-21 16:43:45.409 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:45 np0005590810 podman[273512]: 2026-01-21 16:43:45.4143768 +0000 UTC m=+0.047362800 container create 941993a4b2552e0dbe38cbb38c8cda21be25d8788bbe420c41bc7e973e338cb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_carver, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 11:43:45 np0005590810 systemd[1]: Started libpod-conmon-941993a4b2552e0dbe38cbb38c8cda21be25d8788bbe420c41bc7e973e338cb7.scope.
Jan 21 11:43:45 np0005590810 podman[273512]: 2026-01-21 16:43:45.391900003 +0000 UTC m=+0.024886023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:43:45 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:43:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fdcf543edd1b08548ebb2e089365c3ecc871ced37d5799bfca74225855d7b4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fdcf543edd1b08548ebb2e089365c3ecc871ced37d5799bfca74225855d7b4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fdcf543edd1b08548ebb2e089365c3ecc871ced37d5799bfca74225855d7b4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fdcf543edd1b08548ebb2e089365c3ecc871ced37d5799bfca74225855d7b4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:45 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fdcf543edd1b08548ebb2e089365c3ecc871ced37d5799bfca74225855d7b4e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:45 np0005590810 podman[273512]: 2026-01-21 16:43:45.515803935 +0000 UTC m=+0.148789955 container init 941993a4b2552e0dbe38cbb38c8cda21be25d8788bbe420c41bc7e973e338cb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 21 11:43:45 np0005590810 podman[273512]: 2026-01-21 16:43:45.525136934 +0000 UTC m=+0.158122934 container start 941993a4b2552e0dbe38cbb38c8cda21be25d8788bbe420c41bc7e973e338cb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_carver, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 11:43:45 np0005590810 podman[273512]: 2026-01-21 16:43:45.528111886 +0000 UTC m=+0.161097886 container attach 941993a4b2552e0dbe38cbb38c8cda21be25d8788bbe420c41bc7e973e338cb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_carver, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:43:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:43:45] "GET /metrics HTTP/1.1" 200 48679 "" "Prometheus/2.51.0"
Jan 21 11:43:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:43:45] "GET /metrics HTTP/1.1" 200 48679 "" "Prometheus/2.51.0"
Jan 21 11:43:45 np0005590810 hardcore_carver[273546]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:43:45 np0005590810 hardcore_carver[273546]: --> All data devices are unavailable
Jan 21 11:43:45 np0005590810 systemd[1]: libpod-941993a4b2552e0dbe38cbb38c8cda21be25d8788bbe420c41bc7e973e338cb7.scope: Deactivated successfully.
Jan 21 11:43:45 np0005590810 podman[273512]: 2026-01-21 16:43:45.902528258 +0000 UTC m=+0.535514268 container died 941993a4b2552e0dbe38cbb38c8cda21be25d8788bbe420c41bc7e973e338cb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_carver, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:43:45 np0005590810 systemd[1]: var-lib-containers-storage-overlay-0fdcf543edd1b08548ebb2e089365c3ecc871ced37d5799bfca74225855d7b4e-merged.mount: Deactivated successfully.
Jan 21 11:43:45 np0005590810 podman[273512]: 2026-01-21 16:43:45.945878731 +0000 UTC m=+0.578864741 container remove 941993a4b2552e0dbe38cbb38c8cda21be25d8788bbe420c41bc7e973e338cb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_carver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 11:43:45 np0005590810 systemd[1]: libpod-conmon-941993a4b2552e0dbe38cbb38c8cda21be25d8788bbe420c41bc7e973e338cb7.scope: Deactivated successfully.
Jan 21 11:43:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1098: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 85 op/s
Jan 21 11:43:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:43:46 np0005590810 podman[273662]: 2026-01-21 16:43:46.572916175 +0000 UTC m=+0.046061879 container create 4607bdc6e446d17f8d8c13bd681ecd71762629adac5953ff611b22fd8b1c9434 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_dhawan, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:43:46 np0005590810 systemd[1]: Started libpod-conmon-4607bdc6e446d17f8d8c13bd681ecd71762629adac5953ff611b22fd8b1c9434.scope.
Jan 21 11:43:46 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:43:46 np0005590810 podman[273662]: 2026-01-21 16:43:46.554919138 +0000 UTC m=+0.028064872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:43:46 np0005590810 podman[273662]: 2026-01-21 16:43:46.654492045 +0000 UTC m=+0.127637769 container init 4607bdc6e446d17f8d8c13bd681ecd71762629adac5953ff611b22fd8b1c9434 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_dhawan, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:43:46 np0005590810 podman[273662]: 2026-01-21 16:43:46.663101072 +0000 UTC m=+0.136246776 container start 4607bdc6e446d17f8d8c13bd681ecd71762629adac5953ff611b22fd8b1c9434 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Jan 21 11:43:46 np0005590810 podman[273662]: 2026-01-21 16:43:46.665923309 +0000 UTC m=+0.139069063 container attach 4607bdc6e446d17f8d8c13bd681ecd71762629adac5953ff611b22fd8b1c9434 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 21 11:43:46 np0005590810 distracted_dhawan[273679]: 167 167
Jan 21 11:43:46 np0005590810 systemd[1]: libpod-4607bdc6e446d17f8d8c13bd681ecd71762629adac5953ff611b22fd8b1c9434.scope: Deactivated successfully.
Jan 21 11:43:46 np0005590810 podman[273662]: 2026-01-21 16:43:46.668925563 +0000 UTC m=+0.142071287 container died 4607bdc6e446d17f8d8c13bd681ecd71762629adac5953ff611b22fd8b1c9434 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_dhawan, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 11:43:46 np0005590810 systemd[1]: var-lib-containers-storage-overlay-7a562796c705534a469504e7f14bdb36c0538bf2c6da26b1c8ce3c63aaf584ba-merged.mount: Deactivated successfully.
Jan 21 11:43:46 np0005590810 podman[273662]: 2026-01-21 16:43:46.714178336 +0000 UTC m=+0.187324040 container remove 4607bdc6e446d17f8d8c13bd681ecd71762629adac5953ff611b22fd8b1c9434 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:43:46 np0005590810 systemd[1]: libpod-conmon-4607bdc6e446d17f8d8c13bd681ecd71762629adac5953ff611b22fd8b1c9434.scope: Deactivated successfully.
Jan 21 11:43:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:46.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:46 np0005590810 podman[273703]: 2026-01-21 16:43:46.901304088 +0000 UTC m=+0.054660276 container create 41e41c1848852747eb16c154131b534406f934fd8d7e3bc110184f69587693c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 21 11:43:46 np0005590810 systemd[1]: Started libpod-conmon-41e41c1848852747eb16c154131b534406f934fd8d7e3bc110184f69587693c8.scope.
Jan 21 11:43:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:46.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:46 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:43:46 np0005590810 podman[273703]: 2026-01-21 16:43:46.881879976 +0000 UTC m=+0.035236174 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:43:46 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9cc8314dff4d2e12051d8b985c3e5f8caa22c74ad39af2d4fd908031b3b8aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:46 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9cc8314dff4d2e12051d8b985c3e5f8caa22c74ad39af2d4fd908031b3b8aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:46 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9cc8314dff4d2e12051d8b985c3e5f8caa22c74ad39af2d4fd908031b3b8aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:46 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9cc8314dff4d2e12051d8b985c3e5f8caa22c74ad39af2d4fd908031b3b8aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:46 np0005590810 podman[273703]: 2026-01-21 16:43:46.992404523 +0000 UTC m=+0.145760721 container init 41e41c1848852747eb16c154131b534406f934fd8d7e3bc110184f69587693c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 11:43:46 np0005590810 nova_compute[251104]: 2026-01-21 16:43:46.990 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:46 np0005590810 podman[273703]: 2026-01-21 16:43:46.999492513 +0000 UTC m=+0.152848691 container start 41e41c1848852747eb16c154131b534406f934fd8d7e3bc110184f69587693c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 21 11:43:47 np0005590810 podman[273703]: 2026-01-21 16:43:47.004211929 +0000 UTC m=+0.157568157 container attach 41e41c1848852747eb16c154131b534406f934fd8d7e3bc110184f69587693c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 21 11:43:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:47.220Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]: {
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:    "0": [
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:        {
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            "devices": [
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "/dev/loop3"
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            ],
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            "lv_name": "ceph_lv0",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            "lv_size": "21470642176",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            "name": "ceph_lv0",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            "tags": {
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.cluster_name": "ceph",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.crush_device_class": "",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.encrypted": "0",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.osd_id": "0",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.type": "block",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.vdo": "0",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:                "ceph.with_tpm": "0"
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            },
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            "type": "block",
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:            "vg_name": "ceph_vg0"
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:        }
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]:    ]
Jan 21 11:43:47 np0005590810 intelligent_hellman[273719]: }
Jan 21 11:43:47 np0005590810 systemd[1]: libpod-41e41c1848852747eb16c154131b534406f934fd8d7e3bc110184f69587693c8.scope: Deactivated successfully.
Jan 21 11:43:47 np0005590810 podman[273703]: 2026-01-21 16:43:47.314983096 +0000 UTC m=+0.468339324 container died 41e41c1848852747eb16c154131b534406f934fd8d7e3bc110184f69587693c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:43:47 np0005590810 systemd[1]: var-lib-containers-storage-overlay-9d9cc8314dff4d2e12051d8b985c3e5f8caa22c74ad39af2d4fd908031b3b8aa-merged.mount: Deactivated successfully.
Jan 21 11:43:47 np0005590810 nova_compute[251104]: 2026-01-21 16:43:47.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:43:47 np0005590810 nova_compute[251104]: 2026-01-21 16:43:47.370 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 11:43:47 np0005590810 nova_compute[251104]: 2026-01-21 16:43:47.370 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 11:43:47 np0005590810 nova_compute[251104]: 2026-01-21 16:43:47.388 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 11:43:47 np0005590810 podman[273703]: 2026-01-21 16:43:47.392350905 +0000 UTC m=+0.545707083 container remove 41e41c1848852747eb16c154131b534406f934fd8d7e3bc110184f69587693c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:43:47 np0005590810 systemd[1]: libpod-conmon-41e41c1848852747eb16c154131b534406f934fd8d7e3bc110184f69587693c8.scope: Deactivated successfully.
Jan 21 11:43:48 np0005590810 podman[273833]: 2026-01-21 16:43:48.000633498 +0000 UTC m=+0.043278863 container create 6f38d58ab80ebe3ad6e2587aabda7118d159631dcb61538c01b3fa14b420aad8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 21 11:43:48 np0005590810 systemd[1]: Started libpod-conmon-6f38d58ab80ebe3ad6e2587aabda7118d159631dcb61538c01b3fa14b420aad8.scope.
Jan 21 11:43:48 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:43:48 np0005590810 podman[273833]: 2026-01-21 16:43:48.069544425 +0000 UTC m=+0.112189810 container init 6f38d58ab80ebe3ad6e2587aabda7118d159631dcb61538c01b3fa14b420aad8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_agnesi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:43:48 np0005590810 podman[273833]: 2026-01-21 16:43:47.982097423 +0000 UTC m=+0.024742778 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:43:48 np0005590810 podman[273833]: 2026-01-21 16:43:48.078368718 +0000 UTC m=+0.121014073 container start 6f38d58ab80ebe3ad6e2587aabda7118d159631dcb61538c01b3fa14b420aad8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_agnesi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:43:48 np0005590810 podman[273833]: 2026-01-21 16:43:48.082211037 +0000 UTC m=+0.124856422 container attach 6f38d58ab80ebe3ad6e2587aabda7118d159631dcb61538c01b3fa14b420aad8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_agnesi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 11:43:48 np0005590810 dazzling_agnesi[273849]: 167 167
Jan 21 11:43:48 np0005590810 systemd[1]: libpod-6f38d58ab80ebe3ad6e2587aabda7118d159631dcb61538c01b3fa14b420aad8.scope: Deactivated successfully.
Jan 21 11:43:48 np0005590810 podman[273833]: 2026-01-21 16:43:48.083509288 +0000 UTC m=+0.126154643 container died 6f38d58ab80ebe3ad6e2587aabda7118d159631dcb61538c01b3fa14b420aad8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 11:43:48 np0005590810 systemd[1]: var-lib-containers-storage-overlay-925d2632da0e2f97e17381be72f6318e891d0000d4426d45c69bf0bbd26e22ca-merged.mount: Deactivated successfully.
Jan 21 11:43:48 np0005590810 podman[273833]: 2026-01-21 16:43:48.11874592 +0000 UTC m=+0.161391275 container remove 6f38d58ab80ebe3ad6e2587aabda7118d159631dcb61538c01b3fa14b420aad8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 11:43:48 np0005590810 systemd[1]: libpod-conmon-6f38d58ab80ebe3ad6e2587aabda7118d159631dcb61538c01b3fa14b420aad8.scope: Deactivated successfully.
Jan 21 11:43:48 np0005590810 podman[273874]: 2026-01-21 16:43:48.301529659 +0000 UTC m=+0.049390814 container create 15fe9ee71054fb05b2458f0e61b611ed47ea7c2d2b3ae686d0e93eb4277fa3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 11:43:48 np0005590810 systemd[1]: Started libpod-conmon-15fe9ee71054fb05b2458f0e61b611ed47ea7c2d2b3ae686d0e93eb4277fa3d3.scope.
Jan 21 11:43:48 np0005590810 nova_compute[251104]: 2026-01-21 16:43:48.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:43:48 np0005590810 podman[273874]: 2026-01-21 16:43:48.280898168 +0000 UTC m=+0.028759343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:43:48 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:43:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a2c70aa7f969a39bfe080dc0ebc1dacf7673b439bc8a7c6b2063d382a32fd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a2c70aa7f969a39bfe080dc0ebc1dacf7673b439bc8a7c6b2063d382a32fd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a2c70aa7f969a39bfe080dc0ebc1dacf7673b439bc8a7c6b2063d382a32fd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:48 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a2c70aa7f969a39bfe080dc0ebc1dacf7673b439bc8a7c6b2063d382a32fd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:43:48 np0005590810 nova_compute[251104]: 2026-01-21 16:43:48.395 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:43:48 np0005590810 podman[273874]: 2026-01-21 16:43:48.399849988 +0000 UTC m=+0.147711163 container init 15fe9ee71054fb05b2458f0e61b611ed47ea7c2d2b3ae686d0e93eb4277fa3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 21 11:43:48 np0005590810 podman[273874]: 2026-01-21 16:43:48.406428871 +0000 UTC m=+0.154290026 container start 15fe9ee71054fb05b2458f0e61b611ed47ea7c2d2b3ae686d0e93eb4277fa3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:43:48 np0005590810 podman[273874]: 2026-01-21 16:43:48.41960496 +0000 UTC m=+0.167466145 container attach 15fe9ee71054fb05b2458f0e61b611ed47ea7c2d2b3ae686d0e93eb4277fa3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dhawan, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:43:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 85 op/s
Jan 21 11:43:48 np0005590810 nova_compute[251104]: 2026-01-21 16:43:48.437 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:43:48 np0005590810 nova_compute[251104]: 2026-01-21 16:43:48.437 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:43:48 np0005590810 nova_compute[251104]: 2026-01-21 16:43:48.437 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:43:48 np0005590810 nova_compute[251104]: 2026-01-21 16:43:48.438 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:43:48 np0005590810 nova_compute[251104]: 2026-01-21 16:43:48.438 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:43:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:43:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:48.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:43:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:48.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:43:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:43:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011856721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:43:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:48.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:48 np0005590810 nova_compute[251104]: 2026-01-21 16:43:48.975 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:43:49 np0005590810 lvm[273989]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:43:49 np0005590810 lvm[273989]: VG ceph_vg0 finished
Jan 21 11:43:49 np0005590810 elegant_dhawan[273891]: {}
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.191 251108 WARNING nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.193 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4493MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.194 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.194 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:43:49 np0005590810 systemd[1]: libpod-15fe9ee71054fb05b2458f0e61b611ed47ea7c2d2b3ae686d0e93eb4277fa3d3.scope: Deactivated successfully.
Jan 21 11:43:49 np0005590810 systemd[1]: libpod-15fe9ee71054fb05b2458f0e61b611ed47ea7c2d2b3ae686d0e93eb4277fa3d3.scope: Consumed 1.319s CPU time.
Jan 21 11:43:49 np0005590810 podman[273874]: 2026-01-21 16:43:49.2213306 +0000 UTC m=+0.969191775 container died 15fe9ee71054fb05b2458f0e61b611ed47ea7c2d2b3ae686d0e93eb4277fa3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dhawan, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:43:49 np0005590810 systemd[1]: var-lib-containers-storage-overlay-c1a2c70aa7f969a39bfe080dc0ebc1dacf7673b439bc8a7c6b2063d382a32fd4-merged.mount: Deactivated successfully.
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.303 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.303 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.328 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Refreshing inventories for resource provider 2519faba-4002-49a2-b483-5098e748d2b5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 21 11:43:49 np0005590810 podman[273874]: 2026-01-21 16:43:49.32965383 +0000 UTC m=+1.077514985 container remove 15fe9ee71054fb05b2458f0e61b611ed47ea7c2d2b3ae686d0e93eb4277fa3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dhawan, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:43:49 np0005590810 systemd[1]: libpod-conmon-15fe9ee71054fb05b2458f0e61b611ed47ea7c2d2b3ae686d0e93eb4277fa3d3.scope: Deactivated successfully.
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.355 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Updating ProviderTree inventory for provider 2519faba-4002-49a2-b483-5098e748d2b5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.356 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Updating inventory in ProviderTree for provider 2519faba-4002-49a2-b483-5098e748d2b5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.385 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Refreshing aggregate associations for resource provider 2519faba-4002-49a2-b483-5098e748d2b5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 21 11:43:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:43:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:43:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.417 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Refreshing trait associations for resource provider 2519faba-4002-49a2-b483-5098e748d2b5, traits: COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 21 11:43:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.447 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:43:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:43:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452836056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.960 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.967 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:43:49 np0005590810 nova_compute[251104]: 2026-01-21 16:43:49.981 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:43:50 np0005590810 nova_compute[251104]: 2026-01-21 16:43:50.006 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 11:43:50 np0005590810 nova_compute[251104]: 2026-01-21 16:43:50.006 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:43:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:43:50 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:43:50 np0005590810 nova_compute[251104]: 2026-01-21 16:43:50.411 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 85 op/s
Jan 21 11:43:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:50.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:50.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:50 np0005590810 nova_compute[251104]: 2026-01-21 16:43:50.980 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:43:50 np0005590810 nova_compute[251104]: 2026-01-21 16:43:50.981 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 11:43:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:43:51 np0005590810 nova_compute[251104]: 2026-01-21 16:43:51.995 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:52 np0005590810 nova_compute[251104]: 2026-01-21 16:43:52.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:43:52 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 83 op/s
Jan 21 11:43:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:52.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:52.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:43:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:43:54 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 83 op/s
Jan 21 11:43:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:54.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:54.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:55 np0005590810 nova_compute[251104]: 2026-01-21 16:43:55.413 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:55 np0005590810 ovn_controller[152632]: 2026-01-21T16:43:55Z|00087|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 21 11:43:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:43:55] "GET /metrics HTTP/1.1" 200 48675 "" "Prometheus/2.51.0"
Jan 21 11:43:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:43:55] "GET /metrics HTTP/1.1" 200 48675 "" "Prometheus/2.51.0"
Jan 21 11:43:56 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 109 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Jan 21 11:43:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:43:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:56.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:56.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:43:56 np0005590810 nova_compute[251104]: 2026-01-21 16:43:56.998 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:43:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:57.222Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:43:58 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 109 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 2.0 MiB/s wr, 39 op/s
Jan 21 11:43:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:43:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:43:58.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:43:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:43:58.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:43:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:43:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:43:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:43:58.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:00 np0005590810 nova_compute[251104]: 2026-01-21 16:44:00.415 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:00 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 21 11:44:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:00.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:00.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:44:02 np0005590810 nova_compute[251104]: 2026-01-21 16:44:02.001 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:02 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 21 11:44:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:02.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:02.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:04 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 21 11:44:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:04.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:04.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:05 np0005590810 nova_compute[251104]: 2026-01-21 16:44:05.419 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:44:05] "GET /metrics HTTP/1.1" 200 48675 "" "Prometheus/2.51.0"
Jan 21 11:44:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:44:05] "GET /metrics HTTP/1.1" 200 48675 "" "Prometheus/2.51.0"
Jan 21 11:44:06 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 21 11:44:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:44:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:44:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:06.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:44:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:06.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:07 np0005590810 nova_compute[251104]: 2026-01-21 16:44:07.004 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:07.224Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:44:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:07.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:44:08 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 106 KiB/s wr, 24 op/s
Jan 21 11:44:08 np0005590810 podman[274095]: 2026-01-21 16:44:08.683567957 +0000 UTC m=+0.062052059 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 21 11:44:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:08.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:08.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:44:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:08.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:44:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:44:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:44:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:44:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:44:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:44:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:44:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:44:10 np0005590810 nova_compute[251104]: 2026-01-21 16:44:10.421 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:10 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 201 KiB/s rd, 107 KiB/s wr, 52 op/s
Jan 21 11:44:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:10.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:10.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:44:11 np0005590810 podman[274120]: 2026-01-21 16:44:11.727469301 +0000 UTC m=+0.100433398 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 21 11:44:12 np0005590810 nova_compute[251104]: 2026-01-21 16:44:12.006 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:12 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 21 11:44:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:12.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:12.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:13 np0005590810 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 21 11:44:14 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 21 11:44:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:44:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:14.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:44:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:14.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:15 np0005590810 nova_compute[251104]: 2026-01-21 16:44:15.423 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:44:15] "GET /metrics HTTP/1.1" 200 48673 "" "Prometheus/2.51.0"
Jan 21 11:44:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:44:15] "GET /metrics HTTP/1.1" 200 48673 "" "Prometheus/2.51.0"
Jan 21 11:44:16 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 21 11:44:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:44:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:44:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:16.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:44:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:16.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:17 np0005590810 nova_compute[251104]: 2026-01-21 16:44:17.010 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:17.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:44:18 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 21 11:44:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:18.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:18.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:44:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:18.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:44:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:19.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 11:44:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/246397406' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 11:44:19 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 11:44:19 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/246397406' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 11:44:20 np0005590810 nova_compute[251104]: 2026-01-21 16:44:20.425 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:20 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 21 11:44:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:44:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:20.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:44:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:21.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:44:22 np0005590810 nova_compute[251104]: 2026-01-21 16:44:22.014 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:44:22.036 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:44:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:44:22.037 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:44:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:44:22.038 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:44:22 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:44:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:44:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:22.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:44:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:44:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:23.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:44:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:44:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:44:24 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:44:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:24.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:44:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:25.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:44:25 np0005590810 nova_compute[251104]: 2026-01-21 16:44:25.427 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:44:25] "GET /metrics HTTP/1.1" 200 48659 "" "Prometheus/2.51.0"
Jan 21 11:44:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:44:25] "GET /metrics HTTP/1.1" 200 48659 "" "Prometheus/2.51.0"
Jan 21 11:44:26 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:44:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:44:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:26.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:27.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:27 np0005590810 nova_compute[251104]: 2026-01-21 16:44:27.016 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:27.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:44:28 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:44:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:28.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:28.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:44:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:44:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:29.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:44:30 np0005590810 nova_compute[251104]: 2026-01-21 16:44:30.428 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:30 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:44:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:44:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:30.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:44:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:44:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:31.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:44:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:44:32 np0005590810 nova_compute[251104]: 2026-01-21 16:44:32.019 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:32 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:44:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:32.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:33.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:34 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:44:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:44:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:34.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:44:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:35.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:35 np0005590810 nova_compute[251104]: 2026-01-21 16:44:35.429 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:44:35] "GET /metrics HTTP/1.1" 200 48659 "" "Prometheus/2.51.0"
Jan 21 11:44:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:44:35] "GET /metrics HTTP/1.1" 200 48659 "" "Prometheus/2.51.0"
Jan 21 11:44:36 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:44:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:44:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:44:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:36.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:44:37 np0005590810 nova_compute[251104]: 2026-01-21 16:44:37.022 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:37.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:37.228Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:44:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:37.228Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:44:38 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:44:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:38.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:44:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:38.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:39.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:44:39
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta', '.mgr', '.nfs', '.rgw.root', 'default.rgw.meta']
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:44:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:44:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:44:39 np0005590810 nova_compute[251104]: 2026-01-21 16:44:39.362 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:44:39 np0005590810 podman[274201]: 2026-01-21 16:44:39.673211791 +0000 UTC m=+0.049682572 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:44:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:44:40 np0005590810 nova_compute[251104]: 2026-01-21 16:44:40.433 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:40 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:44:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:40.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:41.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:44:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:44:41 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 3034 syncs, 3.74 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2058 writes, 6494 keys, 2058 commit groups, 1.0 writes per commit group, ingest: 6.25 MB, 0.01 MB/s#012Interval WAL: 2058 writes, 905 syncs, 2.27 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 11:44:42 np0005590810 nova_compute[251104]: 2026-01-21 16:44:42.025 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:42 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:44:42 np0005590810 podman[274222]: 2026-01-21 16:44:42.71120933 +0000 UTC m=+0.092199680 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, 
org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 11:44:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:42.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:43.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:43 np0005590810 nova_compute[251104]: 2026-01-21 16:44:43.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:44:44 np0005590810 nova_compute[251104]: 2026-01-21 16:44:44.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:44:44 np0005590810 nova_compute[251104]: 2026-01-21 16:44:44.370 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:44:44 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:44:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:44:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:44.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:44:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:45.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:45 np0005590810 nova_compute[251104]: 2026-01-21 16:44:45.433 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:44:45] "GET /metrics HTTP/1.1" 200 48659 "" "Prometheus/2.51.0"
Jan 21 11:44:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:44:45] "GET /metrics HTTP/1.1" 200 48659 "" "Prometheus/2.51.0"
Jan 21 11:44:46 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:44:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:44:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:44:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:46.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:44:47 np0005590810 nova_compute[251104]: 2026-01-21 16:44:47.028 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:47.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:47.229Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:44:47 np0005590810 nova_compute[251104]: 2026-01-21 16:44:47.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:44:48 np0005590810 nova_compute[251104]: 2026-01-21 16:44:48.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:44:48 np0005590810 nova_compute[251104]: 2026-01-21 16:44:48.388 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:44:48 np0005590810 nova_compute[251104]: 2026-01-21 16:44:48.389 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:44:48 np0005590810 nova_compute[251104]: 2026-01-21 16:44:48.389 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:44:48 np0005590810 nova_compute[251104]: 2026-01-21 16:44:48.389 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:44:48 np0005590810 nova_compute[251104]: 2026-01-21 16:44:48.389 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:44:48 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:44:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:44:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3769740123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:44:48 np0005590810 nova_compute[251104]: 2026-01-21 16:44:48.858 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:44:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:48.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:44:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:48.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:49 np0005590810 nova_compute[251104]: 2026-01-21 16:44:49.034 251108 WARNING nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:44:49 np0005590810 nova_compute[251104]: 2026-01-21 16:44:49.036 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4582MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 11:44:49 np0005590810 nova_compute[251104]: 2026-01-21 16:44:49.036 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:44:49 np0005590810 nova_compute[251104]: 2026-01-21 16:44:49.036 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:44:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:49.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:49 np0005590810 nova_compute[251104]: 2026-01-21 16:44:49.103 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 11:44:49 np0005590810 nova_compute[251104]: 2026-01-21 16:44:49.103 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 11:44:49 np0005590810 nova_compute[251104]: 2026-01-21 16:44:49.118 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:44:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:44:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3418609239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:44:49 np0005590810 nova_compute[251104]: 2026-01-21 16:44:49.575 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:44:49 np0005590810 nova_compute[251104]: 2026-01-21 16:44:49.582 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:44:49 np0005590810 nova_compute[251104]: 2026-01-21 16:44:49.644 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:44:49 np0005590810 nova_compute[251104]: 2026-01-21 16:44:49.646 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 11:44:49 np0005590810 nova_compute[251104]: 2026-01-21 16:44:49.647 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:44:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:44:50 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:44:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:44:50 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:44:50 np0005590810 nova_compute[251104]: 2026-01-21 16:44:50.436 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:50 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:44:50 np0005590810 nova_compute[251104]: 2026-01-21 16:44:50.648 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:44:50 np0005590810 nova_compute[251104]: 2026-01-21 16:44:50.648 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 11:44:50 np0005590810 nova_compute[251104]: 2026-01-21 16:44:50.648 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 11:44:50 np0005590810 nova_compute[251104]: 2026-01-21 16:44:50.666 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 11:44:50 np0005590810 nova_compute[251104]: 2026-01-21 16:44:50.667 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:44:50 np0005590810 nova_compute[251104]: 2026-01-21 16:44:50.667 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 11:44:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:50.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:51.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:44:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 11:44:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:44:51 np0005590810 podman[274570]: 2026-01-21 16:44:51.652760852 +0000 UTC m=+0.038115292 container create 8253fc2bba36cddcc5644dcb185875e75820a47ca66812685ba373ba0eb0f007 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 21 11:44:51 np0005590810 systemd[1]: Started libpod-conmon-8253fc2bba36cddcc5644dcb185875e75820a47ca66812685ba373ba0eb0f007.scope.
Jan 21 11:44:51 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:44:51 np0005590810 podman[274570]: 2026-01-21 16:44:51.633301514 +0000 UTC m=+0.018655984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:44:51 np0005590810 podman[274570]: 2026-01-21 16:44:51.740138121 +0000 UTC m=+0.125492601 container init 8253fc2bba36cddcc5644dcb185875e75820a47ca66812685ba373ba0eb0f007 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_black, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:44:51 np0005590810 podman[274570]: 2026-01-21 16:44:51.749179603 +0000 UTC m=+0.134534053 container start 8253fc2bba36cddcc5644dcb185875e75820a47ca66812685ba373ba0eb0f007 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_black, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 21 11:44:51 np0005590810 podman[274570]: 2026-01-21 16:44:51.753940742 +0000 UTC m=+0.139295192 container attach 8253fc2bba36cddcc5644dcb185875e75820a47ca66812685ba373ba0eb0f007 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:44:51 np0005590810 hardcore_black[274586]: 167 167
Jan 21 11:44:51 np0005590810 systemd[1]: libpod-8253fc2bba36cddcc5644dcb185875e75820a47ca66812685ba373ba0eb0f007.scope: Deactivated successfully.
Jan 21 11:44:51 np0005590810 podman[274570]: 2026-01-21 16:44:51.757026549 +0000 UTC m=+0.142381029 container died 8253fc2bba36cddcc5644dcb185875e75820a47ca66812685ba373ba0eb0f007 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:44:51 np0005590810 systemd[1]: var-lib-containers-storage-overlay-41f805c97127acb0ca56d4d53ac6c0d39884a8f163f947422f456203c25a1822-merged.mount: Deactivated successfully.
Jan 21 11:44:51 np0005590810 podman[274570]: 2026-01-21 16:44:51.882557529 +0000 UTC m=+0.267911979 container remove 8253fc2bba36cddcc5644dcb185875e75820a47ca66812685ba373ba0eb0f007 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_black, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:44:51 np0005590810 systemd[1]: libpod-conmon-8253fc2bba36cddcc5644dcb185875e75820a47ca66812685ba373ba0eb0f007.scope: Deactivated successfully.
Jan 21 11:44:52 np0005590810 nova_compute[251104]: 2026-01-21 16:44:52.032 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:44:52 np0005590810 podman[274613]: 2026-01-21 16:44:52.045479177 +0000 UTC m=+0.040661181 container create 7aa5ebdb72a81bde2bc9e9c9caa5cfdd3fef925669fc5bbabb421f159e87c9e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 21 11:44:52 np0005590810 systemd[1]: Started libpod-conmon-7aa5ebdb72a81bde2bc9e9c9caa5cfdd3fef925669fc5bbabb421f159e87c9e5.scope.
Jan 21 11:44:52 np0005590810 podman[274613]: 2026-01-21 16:44:52.027462274 +0000 UTC m=+0.022644298 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:44:52 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:44:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f913da7580d0cb45faf60ed0c7049aae0a620af1f2fae30d4b2f8b1fd9ef767b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f913da7580d0cb45faf60ed0c7049aae0a620af1f2fae30d4b2f8b1fd9ef767b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f913da7580d0cb45faf60ed0c7049aae0a620af1f2fae30d4b2f8b1fd9ef767b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f913da7580d0cb45faf60ed0c7049aae0a620af1f2fae30d4b2f8b1fd9ef767b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:52 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f913da7580d0cb45faf60ed0c7049aae0a620af1f2fae30d4b2f8b1fd9ef767b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:52 np0005590810 podman[274613]: 2026-01-21 16:44:52.146971946 +0000 UTC m=+0.142153980 container init 7aa5ebdb72a81bde2bc9e9c9caa5cfdd3fef925669fc5bbabb421f159e87c9e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:44:52 np0005590810 podman[274613]: 2026-01-21 16:44:52.158741314 +0000 UTC m=+0.153923318 container start 7aa5ebdb72a81bde2bc9e9c9caa5cfdd3fef925669fc5bbabb421f159e87c9e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lichterman, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 11:44:52 np0005590810 podman[274613]: 2026-01-21 16:44:52.163955277 +0000 UTC m=+0.159137301 container attach 7aa5ebdb72a81bde2bc9e9c9caa5cfdd3fef925669fc5bbabb421f159e87c9e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 21 11:44:52 np0005590810 serene_lichterman[274630]: --> passed data devices: 0 physical, 1 LVM
Jan 21 11:44:52 np0005590810 serene_lichterman[274630]: --> All data devices are unavailable
Jan 21 11:44:52 np0005590810 systemd[1]: libpod-7aa5ebdb72a81bde2bc9e9c9caa5cfdd3fef925669fc5bbabb421f159e87c9e5.scope: Deactivated successfully.
Jan 21 11:44:52 np0005590810 podman[274613]: 2026-01-21 16:44:52.517038204 +0000 UTC m=+0.512220208 container died 7aa5ebdb72a81bde2bc9e9c9caa5cfdd3fef925669fc5bbabb421f159e87c9e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 11:44:52 np0005590810 systemd[1]: var-lib-containers-storage-overlay-f913da7580d0cb45faf60ed0c7049aae0a620af1f2fae30d4b2f8b1fd9ef767b-merged.mount: Deactivated successfully.
Jan 21 11:44:52 np0005590810 podman[274613]: 2026-01-21 16:44:52.570440512 +0000 UTC m=+0.565622516 container remove 7aa5ebdb72a81bde2bc9e9c9caa5cfdd3fef925669fc5bbabb421f159e87c9e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lichterman, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 11:44:52 np0005590810 systemd[1]: libpod-conmon-7aa5ebdb72a81bde2bc9e9c9caa5cfdd3fef925669fc5bbabb421f159e87c9e5.scope: Deactivated successfully.
Jan 21 11:44:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:52.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:53.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 21 11:44:53 np0005590810 podman[274749]: 2026-01-21 16:44:53.133699513 +0000 UTC m=+0.041318141 container create 6770f2754f40b3ae7c2680bb9e5f8401af9387d7bfde2479fc65bc7493a95dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 11:44:53 np0005590810 systemd[1]: Started libpod-conmon-6770f2754f40b3ae7c2680bb9e5f8401af9387d7bfde2479fc65bc7493a95dc9.scope.
Jan 21 11:44:53 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:44:53 np0005590810 podman[274749]: 2026-01-21 16:44:53.114751961 +0000 UTC m=+0.022370619 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:44:53 np0005590810 podman[274749]: 2026-01-21 16:44:53.211698989 +0000 UTC m=+0.119317637 container init 6770f2754f40b3ae7c2680bb9e5f8401af9387d7bfde2479fc65bc7493a95dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:44:53 np0005590810 podman[274749]: 2026-01-21 16:44:53.217659155 +0000 UTC m=+0.125277783 container start 6770f2754f40b3ae7c2680bb9e5f8401af9387d7bfde2479fc65bc7493a95dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_payne, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 21 11:44:53 np0005590810 podman[274749]: 2026-01-21 16:44:53.221108783 +0000 UTC m=+0.128727431 container attach 6770f2754f40b3ae7c2680bb9e5f8401af9387d7bfde2479fc65bc7493a95dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_payne, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 21 11:44:53 np0005590810 gallant_payne[274765]: 167 167
Jan 21 11:44:53 np0005590810 systemd[1]: libpod-6770f2754f40b3ae7c2680bb9e5f8401af9387d7bfde2479fc65bc7493a95dc9.scope: Deactivated successfully.
Jan 21 11:44:53 np0005590810 podman[274749]: 2026-01-21 16:44:53.223311202 +0000 UTC m=+0.130929830 container died 6770f2754f40b3ae7c2680bb9e5f8401af9387d7bfde2479fc65bc7493a95dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_payne, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 11:44:53 np0005590810 systemd[1]: var-lib-containers-storage-overlay-cd68cf59fa1c28636a08527726fcf92479902ece1033f79a565a82b5cb82ff22-merged.mount: Deactivated successfully.
Jan 21 11:44:53 np0005590810 podman[274749]: 2026-01-21 16:44:53.269088611 +0000 UTC m=+0.176707249 container remove 6770f2754f40b3ae7c2680bb9e5f8401af9387d7bfde2479fc65bc7493a95dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 21 11:44:53 np0005590810 systemd[1]: libpod-conmon-6770f2754f40b3ae7c2680bb9e5f8401af9387d7bfde2479fc65bc7493a95dc9.scope: Deactivated successfully.
Jan 21 11:44:53 np0005590810 nova_compute[251104]: 2026-01-21 16:44:53.369 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 11:44:53 np0005590810 podman[274790]: 2026-01-21 16:44:53.443793837 +0000 UTC m=+0.051292073 container create acd8e487ed0bdd309e26f8a6043bac207fbe81e4a34fc1b83dabd824c2680157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_hawking, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:44:53 np0005590810 systemd[1]: Started libpod-conmon-acd8e487ed0bdd309e26f8a6043bac207fbe81e4a34fc1b83dabd824c2680157.scope.
Jan 21 11:44:53 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:44:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4322baf3405152286546319c46f72c5d3a0731950d2c1802f10fb6be5df37a97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:53 np0005590810 podman[274790]: 2026-01-21 16:44:53.420179419 +0000 UTC m=+0.027677675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:44:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4322baf3405152286546319c46f72c5d3a0731950d2c1802f10fb6be5df37a97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4322baf3405152286546319c46f72c5d3a0731950d2c1802f10fb6be5df37a97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:53 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4322baf3405152286546319c46f72c5d3a0731950d2c1802f10fb6be5df37a97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:53 np0005590810 podman[274790]: 2026-01-21 16:44:53.529725531 +0000 UTC m=+0.137223797 container init acd8e487ed0bdd309e26f8a6043bac207fbe81e4a34fc1b83dabd824c2680157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Jan 21 11:44:53 np0005590810 podman[274790]: 2026-01-21 16:44:53.541093236 +0000 UTC m=+0.148591472 container start acd8e487ed0bdd309e26f8a6043bac207fbe81e4a34fc1b83dabd824c2680157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 21 11:44:53 np0005590810 podman[274790]: 2026-01-21 16:44:53.547644271 +0000 UTC m=+0.155142507 container attach acd8e487ed0bdd309e26f8a6043bac207fbe81e4a34fc1b83dabd824c2680157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_hawking, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]: {
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:    "0": [
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:        {
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            "devices": [
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "/dev/loop3"
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            ],
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            "lv_name": "ceph_lv0",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            "lv_size": "21470642176",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d9745984-fea8-5195-8ec5-61f685b5c785,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=63a44247-c214-4217-a027-13e89fae6b3d,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            "lv_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            "name": "ceph_lv0",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            "tags": {
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.block_uuid": "Y5gxe2-3BHR-MGNH-bD4l-hZbn-d8cF-5QWXUA",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.cephx_lockbox_secret": "",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.cluster_fsid": "d9745984-fea8-5195-8ec5-61f685b5c785",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.cluster_name": "ceph",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.crush_device_class": "",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.encrypted": "0",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.osd_fsid": "63a44247-c214-4217-a027-13e89fae6b3d",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.osd_id": "0",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.type": "block",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.vdo": "0",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:                "ceph.with_tpm": "0"
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            },
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            "type": "block",
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:            "vg_name": "ceph_vg0"
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:        }
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]:    ]
Jan 21 11:44:53 np0005590810 stoic_hawking[274807]: }
Jan 21 11:44:53 np0005590810 systemd[1]: libpod-acd8e487ed0bdd309e26f8a6043bac207fbe81e4a34fc1b83dabd824c2680157.scope: Deactivated successfully.
Jan 21 11:44:53 np0005590810 podman[274790]: 2026-01-21 16:44:53.865858518 +0000 UTC m=+0.473356784 container died acd8e487ed0bdd309e26f8a6043bac207fbe81e4a34fc1b83dabd824c2680157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_hawking, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 11:44:53 np0005590810 systemd[1]: var-lib-containers-storage-overlay-4322baf3405152286546319c46f72c5d3a0731950d2c1802f10fb6be5df37a97-merged.mount: Deactivated successfully.
Jan 21 11:44:53 np0005590810 podman[274790]: 2026-01-21 16:44:53.920498955 +0000 UTC m=+0.527997191 container remove acd8e487ed0bdd309e26f8a6043bac207fbe81e4a34fc1b83dabd824c2680157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 21 11:44:53 np0005590810 systemd[1]: libpod-conmon-acd8e487ed0bdd309e26f8a6043bac207fbe81e4a34fc1b83dabd824c2680157.scope: Deactivated successfully.
Jan 21 11:44:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:44:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:44:54 np0005590810 podman[274920]: 2026-01-21 16:44:54.515354263 +0000 UTC m=+0.038730451 container create 8bc7b73e0569a45fd7bd3c7b114ffa26c9f8ccd40e64b84cb94ff9bfa8a6af66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 11:44:54 np0005590810 systemd[1]: Started libpod-conmon-8bc7b73e0569a45fd7bd3c7b114ffa26c9f8ccd40e64b84cb94ff9bfa8a6af66.scope.
Jan 21 11:44:54 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:44:54 np0005590810 podman[274920]: 2026-01-21 16:44:54.581950662 +0000 UTC m=+0.105326890 container init 8bc7b73e0569a45fd7bd3c7b114ffa26c9f8ccd40e64b84cb94ff9bfa8a6af66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_swartz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 11:44:54 np0005590810 podman[274920]: 2026-01-21 16:44:54.589686405 +0000 UTC m=+0.113062593 container start 8bc7b73e0569a45fd7bd3c7b114ffa26c9f8ccd40e64b84cb94ff9bfa8a6af66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_swartz, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:44:54 np0005590810 podman[274920]: 2026-01-21 16:44:54.593667488 +0000 UTC m=+0.117043696 container attach 8bc7b73e0569a45fd7bd3c7b114ffa26c9f8ccd40e64b84cb94ff9bfa8a6af66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 21 11:44:54 np0005590810 busy_swartz[274937]: 167 167
Jan 21 11:44:54 np0005590810 podman[274920]: 2026-01-21 16:44:54.500400126 +0000 UTC m=+0.023776334 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:44:54 np0005590810 systemd[1]: libpod-8bc7b73e0569a45fd7bd3c7b114ffa26c9f8ccd40e64b84cb94ff9bfa8a6af66.scope: Deactivated successfully.
Jan 21 11:44:54 np0005590810 podman[274920]: 2026-01-21 16:44:54.595728904 +0000 UTC m=+0.119105092 container died 8bc7b73e0569a45fd7bd3c7b114ffa26c9f8ccd40e64b84cb94ff9bfa8a6af66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 11:44:54 np0005590810 systemd[1]: var-lib-containers-storage-overlay-524c30df957960b36bdbab4a81e2c3ea1205b4b9e6402c93a4486da4e3c0273e-merged.mount: Deactivated successfully.
Jan 21 11:44:54 np0005590810 podman[274920]: 2026-01-21 16:44:54.63437972 +0000 UTC m=+0.157755918 container remove 8bc7b73e0569a45fd7bd3c7b114ffa26c9f8ccd40e64b84cb94ff9bfa8a6af66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Jan 21 11:44:54 np0005590810 systemd[1]: libpod-conmon-8bc7b73e0569a45fd7bd3c7b114ffa26c9f8ccd40e64b84cb94ff9bfa8a6af66.scope: Deactivated successfully.
Jan 21 11:44:54 np0005590810 podman[274960]: 2026-01-21 16:44:54.816471587 +0000 UTC m=+0.043259902 container create f7712526b5d25ed5c29c714c9a2cd95ac8716f641b6f645e8a148d4bca80c3be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_spence, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 21 11:44:54 np0005590810 systemd[1]: Started libpod-conmon-f7712526b5d25ed5c29c714c9a2cd95ac8716f641b6f645e8a148d4bca80c3be.scope.
Jan 21 11:44:54 np0005590810 systemd[1]: Started libcrun container.
Jan 21 11:44:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:54.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:54 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9985251b57afbb3b5add2001a39b47e6abeef9343ac546302270c7085fb511e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:54 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9985251b57afbb3b5add2001a39b47e6abeef9343ac546302270c7085fb511e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:54 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9985251b57afbb3b5add2001a39b47e6abeef9343ac546302270c7085fb511e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:54 np0005590810 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9985251b57afbb3b5add2001a39b47e6abeef9343ac546302270c7085fb511e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 11:44:54 np0005590810 podman[274960]: 2026-01-21 16:44:54.797143614 +0000 UTC m=+0.023931979 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 21 11:44:54 np0005590810 podman[274960]: 2026-01-21 16:44:54.900586744 +0000 UTC m=+0.127375079 container init f7712526b5d25ed5c29c714c9a2cd95ac8716f641b6f645e8a148d4bca80c3be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 11:44:54 np0005590810 podman[274960]: 2026-01-21 16:44:54.913620861 +0000 UTC m=+0.140409196 container start f7712526b5d25ed5c29c714c9a2cd95ac8716f641b6f645e8a148d4bca80c3be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_spence, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 21 11:44:54 np0005590810 podman[274960]: 2026-01-21 16:44:54.917495513 +0000 UTC m=+0.144283858 container attach f7712526b5d25ed5c29c714c9a2cd95ac8716f641b6f645e8a148d4bca80c3be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_spence, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 21 11:44:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:55.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 21 11:44:55 np0005590810 nova_compute[251104]: 2026-01-21 16:44:55.437 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:55 np0005590810 lvm[275053]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:44:55 np0005590810 lvm[275053]: VG ceph_vg0 finished
Jan 21 11:44:55 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:44:55] "GET /metrics HTTP/1.1" 200 48660 "" "Prometheus/2.51.0"
Jan 21 11:44:55 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:44:55] "GET /metrics HTTP/1.1" 200 48660 "" "Prometheus/2.51.0"
Jan 21 11:44:55 np0005590810 nervous_spence[274977]: {}
Jan 21 11:44:55 np0005590810 systemd[1]: libpod-f7712526b5d25ed5c29c714c9a2cd95ac8716f641b6f645e8a148d4bca80c3be.scope: Deactivated successfully.
Jan 21 11:44:55 np0005590810 podman[274960]: 2026-01-21 16:44:55.68590136 +0000 UTC m=+0.912689685 container died f7712526b5d25ed5c29c714c9a2cd95ac8716f641b6f645e8a148d4bca80c3be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_spence, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 11:44:55 np0005590810 systemd[1]: libpod-f7712526b5d25ed5c29c714c9a2cd95ac8716f641b6f645e8a148d4bca80c3be.scope: Consumed 1.238s CPU time.
Jan 21 11:44:55 np0005590810 systemd[1]: var-lib-containers-storage-overlay-9985251b57afbb3b5add2001a39b47e6abeef9343ac546302270c7085fb511e7-merged.mount: Deactivated successfully.
Jan 21 11:44:55 np0005590810 podman[274960]: 2026-01-21 16:44:55.736722097 +0000 UTC m=+0.963510422 container remove f7712526b5d25ed5c29c714c9a2cd95ac8716f641b6f645e8a148d4bca80c3be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 11:44:55 np0005590810 systemd[1]: libpod-conmon-f7712526b5d25ed5c29c714c9a2cd95ac8716f641b6f645e8a148d4bca80c3be.scope: Deactivated successfully.
Jan 21 11:44:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 11:44:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:44:55 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 11:44:55 np0005590810 ceph-mon[74380]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:44:56 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:44:56 np0005590810 ceph-mon[74380]: from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' 
Jan 21 11:44:56 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:44:56 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:56 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:56 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:56.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:57 np0005590810 nova_compute[251104]: 2026-01-21 16:44:57.036 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:44:57 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:57 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:57 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:57.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:57 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 21 11:44:57 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:57.230Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:44:58 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:44:58.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:44:58 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:58 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:44:58 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:44:58.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:44:59 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:44:59 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:44:59 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:44:59.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:44:59 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 21 11:45:00 np0005590810 nova_compute[251104]: 2026-01-21 16:45:00.440 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:00 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:00 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:00 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:00.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:01 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:01 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:01 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:01.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:01 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 21 11:45:01 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:45:02 np0005590810 nova_compute[251104]: 2026-01-21 16:45:02.040 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:02 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:02 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:02 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:02.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:03 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:03 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:03 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:03.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:03 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:04 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:04 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:04 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:04.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:05 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:05 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:05 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:05.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:05 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:05 np0005590810 nova_compute[251104]: 2026-01-21 16:45:05.442 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:05 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:45:05] "GET /metrics HTTP/1.1" 200 48660 "" "Prometheus/2.51.0"
Jan 21 11:45:05 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:45:05] "GET /metrics HTTP/1.1" 200 48660 "" "Prometheus/2.51.0"
Jan 21 11:45:06 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:45:06 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:06 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:06 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:06.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:07 np0005590810 nova_compute[251104]: 2026-01-21 16:45:07.043 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:07 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:07 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:07 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:07.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:07 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:45:07 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:07.231Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:45:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:08.862Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:45:08 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:08.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:45:08 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:08 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:08 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:09 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:09 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:09 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:09.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:09 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:09 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:45:09 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:45:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:45:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:45:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:45:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:45:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:45:09 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:45:10 np0005590810 nova_compute[251104]: 2026-01-21 16:45:10.445 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:10 np0005590810 podman[275136]: 2026-01-21 16:45:10.694090498 +0000 UTC m=+0.064294259 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 21 11:45:10 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:10 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:10 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:10.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:11 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:11 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:11 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:11.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:11 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:45:11 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:45:12 np0005590810 nova_compute[251104]: 2026-01-21 16:45:12.046 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:12 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:12 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:12 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:12.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:13 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:13 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:13 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:13.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:13 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:13 np0005590810 podman[275161]: 2026-01-21 16:45:13.712056232 +0000 UTC m=+0.089591609 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller)
Jan 21 11:45:14 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:14 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:14 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:14.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:15 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:15 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:15 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:15.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:15 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:15 np0005590810 nova_compute[251104]: 2026-01-21 16:45:15.446 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:15 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:45:15] "GET /metrics HTTP/1.1" 200 48656 "" "Prometheus/2.51.0"
Jan 21 11:45:15 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:45:15] "GET /metrics HTTP/1.1" 200 48656 "" "Prometheus/2.51.0"
Jan 21 11:45:16 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:45:16 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:16 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:16 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:16.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:17 np0005590810 nova_compute[251104]: 2026-01-21 16:45:17.049 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:17 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:17 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:17 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:17.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:17 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:45:17 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:17.232Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:45:18 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:18.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:45:18 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:18 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:18 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:18.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:19 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:19 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:45:19 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:19.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:45:19 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:20 np0005590810 nova_compute[251104]: 2026-01-21 16:45:20.450 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:20 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:20 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:20 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:20.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:21 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:21 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:21 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:21.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:21 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:45:21 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:45:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:45:22.037 163593 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:45:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:45:22.038 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:45:22 np0005590810 ovn_metadata_agent[163588]: 2026-01-21 16:45:22.038 163593 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:45:22 np0005590810 nova_compute[251104]: 2026-01-21 16:45:22.053 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:22 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:22 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:45:22 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:22.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:45:23 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:23 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:23 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:23.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:23 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:24 np0005590810 systemd-logind[795]: New session 56 of user zuul.
Jan 21 11:45:24 np0005590810 systemd[1]: Started Session 56 of User zuul.
Jan 21 11:45:24 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:45:24 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:45:24 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:24 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:24 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:24.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:25 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:25 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:25 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:25.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:25 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:25 np0005590810 nova_compute[251104]: 2026-01-21 16:45:25.450 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:25 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:45:25] "GET /metrics HTTP/1.1" 200 48660 "" "Prometheus/2.51.0"
Jan 21 11:45:25 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:45:25] "GET /metrics HTTP/1.1" 200 48660 "" "Prometheus/2.51.0"
Jan 21 11:45:26 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:45:26 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16212 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:26 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.25936 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:26 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:26 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:26 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:26.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:27 np0005590810 nova_compute[251104]: 2026-01-21 16:45:27.055 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:27 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:27 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:45:27 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:27.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:45:27 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:45:27 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35705 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:27 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:27.233Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:45:27 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16221 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:27 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.25945 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:27 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35711 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:27 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Jan 21 11:45:27 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76703789' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 21 11:45:28 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:28.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:45:28 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:28 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:28 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:28.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:29 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:29 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:29 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:29.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:29 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:30 np0005590810 nova_compute[251104]: 2026-01-21 16:45:30.451 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:30 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:30 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:30 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:31 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:31 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:31 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:31.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:31 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:45:31 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:45:32 np0005590810 nova_compute[251104]: 2026-01-21 16:45:32.059 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:32 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:32 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:45:32 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:32.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:45:33 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:33 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:33 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:33 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:33.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:33 np0005590810 ovs-vsctl[275609]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 21 11:45:33 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35723 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:34 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 21 11:45:34 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 21 11:45:34 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35735 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:34 np0005590810 virtqemud[250664]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 21 11:45:34 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35744 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:34 np0005590810 virtqemud[250664]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 21 11:45:34 np0005590810 virtqemud[250664]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 21 11:45:34 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:34 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:34 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:34.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:35 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:35 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:35 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:35 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:35.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:35 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35756 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:35 np0005590810 nova_compute[251104]: 2026-01-21 16:45:35.453 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:35 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: cache status {prefix=cache status} (starting...)
Jan 21 11:45:35 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:45:35] "GET /metrics HTTP/1.1" 200 48660 "" "Prometheus/2.51.0"
Jan 21 11:45:35 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:45:35] "GET /metrics HTTP/1.1" 200 48660 "" "Prometheus/2.51.0"
Jan 21 11:45:35 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: client ls {prefix=client ls} (starting...)
Jan 21 11:45:35 np0005590810 lvm[275973]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 11:45:35 np0005590810 lvm[275973]: VG ceph_vg0 finished
Jan 21 11:45:35 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.25963 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:36 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35774 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 21 11:45:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 21 11:45:36 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.25978 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:36 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16236 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:36 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: damage ls {prefix=damage ls} (starting...)
Jan 21 11:45:36 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35786 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:45:36 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.25990 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:36 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: dump loads {prefix=dump loads} (starting...)
Jan 21 11:45:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 21 11:45:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2333519913' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 21 11:45:36 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16251 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:36 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 21 11:45:36 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:36 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:36 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:36.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:36 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 21 11:45:36 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 21 11:45:36 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 21 11:45:37 np0005590810 nova_compute[251104]: 2026-01-21 16:45:37.063 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 11:45:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3769319314' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 11:45:37 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26002 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:37 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 21 11:45:37 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:45:37 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:37 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:37 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:37.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:37 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16263 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:37 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 21 11:45:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:37.234Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:45:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:37.234Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:45:37 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 21 11:45:37 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Jan 21 11:45:37 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3399249070' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 21 11:45:37 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 21 11:45:37 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16275 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:37 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35825 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:37 np0005590810 ceph-mgr[74671]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 21 11:45:37 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:45:37.814+0000 7f897a1d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 21 11:45:37 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26029 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:37 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: ops {prefix=ops} (starting...)
Jan 21 11:45:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 21 11:45:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/171142345' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 21 11:45:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 21 11:45:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789671462' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 21 11:45:38 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26041 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:38 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16296 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:38 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: session ls {prefix=session ls} (starting...)
Jan 21 11:45:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 21 11:45:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/651436123' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 21 11:45:38 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 21 11:45:38 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 21 11:45:38 np0005590810 ceph-mds[94997]: mds.cephfs.compute-0.hjphzb asok_command: status {prefix=status} (starting...)
Jan 21 11:45:38 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35861 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:38 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:38.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:45:38 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:38 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:38 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:38.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:38 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16314 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:39 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:39 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:39 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:39.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 21 11:45:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317995409' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35876 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Optimize plan auto_2026-01-21_16:45:39
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] do_upmap
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] pools ['.nfs', 'vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'backups', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'images']
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 11:45:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:45:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:45:39 np0005590810 nova_compute[251104]: 2026-01-21 16:45:39.364 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:45:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 21 11:45:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35891 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 21 11:45:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1809389626' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26092 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:39 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:45:39.658+0000 7f897a1d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:45:39 np0005590810 ceph-mgr[74671]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 11:45:39 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 21 11:45:39 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2377590360' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 21 11:45:40 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35903 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:40 np0005590810 nova_compute[251104]: 2026-01-21 16:45:40.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:45:40 np0005590810 nova_compute[251104]: 2026-01-21 16:45:40.368 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 21 11:45:40 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16359 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:40 np0005590810 ceph-mgr[74671]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 21 11:45:40 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: 2026-01-21T16:45:40.396+0000 7f897a1d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 21 11:45:40 np0005590810 nova_compute[251104]: 2026-01-21 16:45:40.454 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:40 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35918 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 21 11:45:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1168210664' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 21 11:45:40 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26131 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:40 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 21 11:45:40 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3225709645' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 21 11:45:40 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:40 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:45:40 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:40.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:45:40 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35933 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 21 11:45:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2771214992' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 21 11:45:41 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:45:41 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:41 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:41 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:41.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:41 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26143 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 21 11:45:41 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2602239975' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 21 11:45:41 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35945 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:41 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16395 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:41 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:45:41 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26161 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:41 np0005590810 podman[276755]: 2026-01-21 16:45:41.729545242 +0000 UTC m=+0.102135790 container health_status 2e1ae6cb0f427d6819f3f9d63d4c06cbaf97ed4cd8fe4d348407fa5d26f737ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 21 11:45:41 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16401 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:41 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35963 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 21 11:45:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260656389' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 21 11:45:42 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26176 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:42 np0005590810 nova_compute[251104]: 2026-01-21 16:45:42.066 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:42 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16416 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:42 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35975 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 21 11:45:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4133032020' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 21 11:45:42 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26194 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 319488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 319488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 319488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925284 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 311296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 311296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 303104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 303104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 294912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925284 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 294912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 286720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 286720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 36.641864777s of 36.647251129s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 286720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925416 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 286720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926944 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 270336 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 270336 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 270336 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 262144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 262144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926944 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 253952 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 253952 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 245760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 237568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 237568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.497777939s of 16.547515869s, submitted: 11
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926644 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 229376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 229376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 229376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 221184 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 221184 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a75b68400 session 0x557a72b48b40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926796 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 212992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 212992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74c4a000 session 0x557a754a70e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 212992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 212992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 212992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926796 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 204800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 204800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 204800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 196608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 196608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926796 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 188416 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.908756256s of 16.030244827s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 188416 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 172032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 172032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 155648 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927076 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 155648 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 155648 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 155648 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 147456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 147456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928588 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.196969032s of 11.235408783s, submitted: 12
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 270336 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a75b6b000 session 0x557a75f8f860
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 270336 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 270336 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927849 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 262144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 262144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 253952 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 253952 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 253952 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927717 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 245760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 245760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 237568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 237568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.063184738s of 12.077499390s, submitted: 4
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 229376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927849 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 221184 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 221184 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 221184 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 212992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 212992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929377 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 221184 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 212992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 212992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 204800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 204800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928618 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 196608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 188416 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 188416 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 7419 writes, 30K keys, 7419 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 7419 writes, 1308 syncs, 5.67 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 7419 writes, 30K keys, 7419 commit groups, 1.0 writes per commit group, ingest: 20.55 MB, 0.03 MB/s
Interval WAL: 7419 writes, 1308 syncs, 5.67 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557a71a4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557a71a4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.980578423s of 14.184622765s, submitted: 12
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 73728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 73728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928638 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 57344 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 57344 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928638 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 57344 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 49152 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74ea7800 session 0x557a75c63860
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 49152 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 40960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 40960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928638 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 32768 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 32768 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 32768 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 24576 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 24576 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928638 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 24576 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 16384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 16384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 8192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.011508942s of 21.015216827s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 8192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928770 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 8192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a741fd000 session 0x557a759d4780
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 0 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 0 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928786 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.468919754s of 11.482556343s, submitted: 4
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928654 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928195 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931219 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.460424423s of 10.681938171s, submitted: 13
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931203 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931071 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931071 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 892928 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 892928 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931071 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 868352 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931071 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 868352 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931071 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 819200 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931071 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 819200 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 802816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931071 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 802816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 802816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 794624 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 794624 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931071 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 770048 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 761856 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 761856 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 753664 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931071 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a75bad000 session 0x557a75a69860
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 745472 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 745472 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 745472 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 745472 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 737280 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931071 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 737280 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 729088 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 729088 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 720896 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 720896 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931071 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 712704 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 59.797008514s of 60.510898590s, submitted: 6
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932731 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 663552 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 638976 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 638976 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931381 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 630784 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 622592 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 622592 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 614400 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 614400 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931533 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 598016 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.244199753s of 15.353507996s, submitted: 12
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 573440 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 573440 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 573440 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 565248 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931401 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 565248 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 565248 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 565248 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 565248 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 565248 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931401 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 565248 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 548864 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 548864 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 548864 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 548864 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931401 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 548864 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 548864 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 548864 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 548864 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.623893738s of 17.628515244s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931401 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1269760 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 114688 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 106496 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 65536 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 40960 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931473 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 40960 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 40960 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 40960 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a75b69400 session 0x557a75f41c20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 40960 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.044009209s of 10.001208305s, submitted: 174
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 24576 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931401 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 24576 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74202400 session 0x557a762bdc20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 16384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 16384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 16384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 16384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931401 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 16384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 1040384 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1032192 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74202c00 session 0x557a765b61e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1032192 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1032192 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931401 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1032192 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1032192 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.699436188s of 12.817781448s, submitted: 26
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1032192 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1032192 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1032192 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931549 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1032192 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1032192 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 1032192 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 1024000 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 1024000 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931797 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 1024000 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 1015808 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.210536003s of 10.238770485s, submitted: 9
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 999424 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 958464 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931697 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 933888 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932470 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 860160 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 860160 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932322 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 860160 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 860160 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16437 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932322 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932322 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932322 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932322 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a75f2b400 session 0x557a75f8f0e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932322 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932322 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.466590881s of 47.510181427s, submitted: 13
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932454 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933982 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 1949696 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 1949696 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 1949696 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 1925120 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 1925120 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933223 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 1925120 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 1925120 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 1925120 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 1925120 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.102165222s of 15.138242722s, submitted: 12
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933243 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a75b6b400 session 0x557a75f41a40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933243 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933243 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.381120682s of 12.384155273s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934903 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936415 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935808 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.845931053s of 13.894298553s, submitted: 12
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935676 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935676 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74c1e800 session 0x557a7662fa40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935676 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74c1f800 session 0x557a7660b0e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935676 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.405199051s of 22.414850235s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 1851392 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 1851392 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935808 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 1851392 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 1851392 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 1851392 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 1851392 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 1851392 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935824 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 1851392 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 1810432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 1810432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 1810432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.161069870s of 11.207889557s, submitted: 8
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935956 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935217 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 1753088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread fragmentation_score=0.000025 took=0.000057s
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935085 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935085 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935085 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935085 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935085 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935085 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935085 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935085 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74c98c00 session 0x557a72b49e00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935085 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935085 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a756b5c00 session 0x557a75f8f4a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 60.473049164s of 60.509712219s, submitted: 11
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935217 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935233 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935365 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.939503670s of 12.976491928s, submitted: 8
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935365 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936729 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936729 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.689754486s of 13.727371216s, submitted: 10
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936597 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936597 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936597 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936597 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936597 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a741d8000 session 0x557a740cc960
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936597 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936597 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 36.820064545s of 36.824859619s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936729 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938257 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.544095993s of 11.462927818s, submitted: 10
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937957 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937518 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937518 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937518 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937518 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937518 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937518 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937518 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937518 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74c4c400 session 0x557a7662fe00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937518 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937518 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937518 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 57.756572723s of 57.762760162s, submitted: 2
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939178 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1761280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 1753088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 1753088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 1753088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939178 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 1753088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 1753088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 1753088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a72fb7800 session 0x557a741f2d20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 1753088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.163542747s of 12.197920799s, submitted: 11
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 1753088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938439 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938439 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.518382072s of 11.521335602s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938571 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938587 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 1744896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 1720320 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938587 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.779762268s of 11.813998222s, submitted: 10
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 1679360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 1671168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74c48c00 session 0x557a73db4000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74273800 session 0x557a75e581e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 1662976 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937848 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 73.941833496s of 73.948043823s, submitted: 2
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937996 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 8129 writes, 31K keys, 8129 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 8129 writes, 1655 syncs, 4.91 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 710 writes, 1243 keys, 710 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s
Interval WAL: 710 writes, 347 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557a71a4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557a71a4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 1622016 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 1622016 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 1622016 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 1613824 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939640 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 1613824 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.195051193s of 10.229851723s, submitted: 9
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 1581056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 1581056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 1581056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 1548288 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939640 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 1548288 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938901 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.016671181s of 11.064276695s, submitted: 11
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a741d8800 session 0x557a763810e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1409024 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.175491333s of 38.200939178s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938901 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1409024 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1409024 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1409024 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938917 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 294912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 294912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 294912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 352256 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938917 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.444137573s of 15.500663757s, submitted: 9
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938617 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 344064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 327680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 319488 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 335872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 67.165901184s of 67.168876648s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 303104 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a741d9400 session 0x557a741f30e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 294912 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 1253376 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 1253376 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 1261568 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 1261568 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 1212416 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 1212416 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938769 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 1212416 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 1204224 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 1196032 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 1196032 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.185399055s of 12.287237167s, submitted: 212
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 1196032 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938901 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 1196032 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 1196032 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 1187840 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 1187840 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 1187840 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940429 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 1187840 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 1179648 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 1179648 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 1179648 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 1179648 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941925 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 1179648 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 1179648 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 1171456 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 1171456 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.468020439s of 15.044280052s, submitted: 11
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1163264 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941793 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1163264 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1163264 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1163264 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1163264 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1163264 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941793 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1163264 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 1155072 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 1155072 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 1155072 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 1155072 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941793 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 1155072 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 1155072 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 1155072 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 1138688 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 1138688 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941793 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 1138688 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 1138688 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 1138688 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 1138688 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 1138688 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941793 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 1138688 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 1138688 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 1138688 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1130496 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1130496 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941793 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1130496 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74c4c800 session 0x557a7662cd20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1130496 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1130496 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1130496 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1130496 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941793 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1130496 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1130496 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1130496 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1130496 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 1130496 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941793 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1122304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1122304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.550525665s of 38.554470062s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1122304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1122304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 1122304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941925 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 1114112 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 1114112 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 1114112 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 1105920 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 1105920 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941941 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 1105920 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1089536 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1089536 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1089536 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.997545242s of 12.059866905s, submitted: 11
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1089536 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940743 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1089536 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1089536 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1089536 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a741fd000 session 0x557a75ee61e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1089536 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940611 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74c4bc00 session 0x557a75e583c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940611 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.830932617s of 14.836821556s, submitted: 2
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940743 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1056768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 1024000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940723 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 1024000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 1024000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1007616 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 999424 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 999424 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940891 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.087175369s of 11.124676704s, submitted: 10
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 999424 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 966656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 950272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 950272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 950272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942255 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 942080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 942080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 942080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 942080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 933888 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942255 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 933888 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 933888 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.943896294s of 11.972151756s, submitted: 8
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 933888 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942123 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942123 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942123 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942123 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 917504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942123 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35984 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74c4c000 session 0x557a762bd680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942123 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a741fac00 session 0x557a754ade00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942123 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 901120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 36.022544861s of 36.026805878s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942255 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 892928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 876544 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 876544 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 860160 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 827392 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943915 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 827392 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 827392 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1851392 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1851392 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1851392 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943915 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.190388680s of 12.225730896s, submitted: 11
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1851392 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1851392 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1851392 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 ms_handle_reset con 0x557a74c48000 session 0x557a762bc780
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1851392 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943615 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1851392 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xe9ce2/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1843200 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 18497536 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 141 ms_handle_reset con 0x557a741ff400 session 0x557a733cde00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 18481152 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 18489344 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 142 ms_handle_reset con 0x557a741fac00 session 0x557a75ee61e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096049 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb1f2000/0x0/0x4ffc00000, data 0x156005c/0x161a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 18481152 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb1ed000/0x0/0x4ffc00000, data 0x1562187/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 18481152 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 18481152 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 18481152 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 18472960 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.392205238s of 14.621644020s, submitted: 41
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098939 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18464768 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1ea000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18464768 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18464768 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 18612224 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 18612224 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099627 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 18612224 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 18612224 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 18604032 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 18604032 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 18604032 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099627 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 18604032 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 18595840 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 18595840 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.281466484s of 13.335536003s, submitted: 21
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 18587648 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 18587648 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099327 data_alloc: 218103808 data_used: 106496
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099479 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099479 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099479 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099479 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099479 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099479 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099479 data_alloc: 218103808 data_used: 110592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 ms_handle_reset con 0x557a756b5c00 session 0x557a767db4a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 ms_handle_reset con 0x557a74204800 session 0x557a75eab2c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 ms_handle_reset con 0x557a72fba400 session 0x557a741f3680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 18579456 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 ms_handle_reset con 0x557a75b66000 session 0x557a7689af00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 10698752 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 ms_handle_reset con 0x557a75b66000 session 0x557a7660a000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb1eb000/0x0/0x4ffc00000, data 0x1564159/0x1621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 ms_handle_reset con 0x557a75335800 session 0x557a762bc3c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 92045312 unmapped: 10682368 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.025814056s of 41.028923035s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 10657792 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122853 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 103161856 unmapped: 7962624 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 ms_handle_reset con 0x557a74c38000 session 0x557a759d4f00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 ms_handle_reset con 0x557a733b7800 session 0x557a75f410e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 ms_handle_reset con 0x557a74c48c00 session 0x557a762bc000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 ms_handle_reset con 0x557a74c48c00 session 0x557a75f370e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 ms_handle_reset con 0x557a733b7800 session 0x557a754c54a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 93388800 unmapped: 25083904 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 25034752 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 ms_handle_reset con 0x557a75b6b000 session 0x557a7662fe00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 25051136 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa19b000/0x0/0x4ffc00000, data 0x25af395/0x266f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 25051136 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa19b000/0x0/0x4ffc00000, data 0x25af395/0x266f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 ms_handle_reset con 0x557a74200000 session 0x557a75f36960
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246031 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 25051136 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 ms_handle_reset con 0x557a72fbb000 session 0x557a768ecb40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 93421568 unmapped: 25051136 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 ms_handle_reset con 0x557a733b7800 session 0x557a75da7a40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 ms_handle_reset con 0x557a74200000 session 0x557a768ec780
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 93437952 unmapped: 25034752 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 93503488 unmapped: 24969216 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa197000/0x0/0x4ffc00000, data 0x25b1367/0x2672000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 95346688 unmapped: 23126016 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336321 data_alloc: 234881024 data_used: 20004864
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 9814016 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa199000/0x0/0x4ffc00000, data 0x25b1367/0x2672000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c47800 session 0x557a75ee6d20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108707840 unmapped: 9764864 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa199000/0x0/0x4ffc00000, data 0x25b1367/0x2672000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108740608 unmapped: 9732096 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108740608 unmapped: 9732096 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108740608 unmapped: 9732096 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362009 data_alloc: 234881024 data_used: 22978560
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108740608 unmapped: 9732096 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108740608 unmapped: 9732096 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa199000/0x0/0x4ffc00000, data 0x25b1367/0x2672000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108781568 unmapped: 9691136 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa199000/0x0/0x4ffc00000, data 0x25b1367/0x2672000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 9658368 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 9658368 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362009 data_alloc: 234881024 data_used: 22978560
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 9658368 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.074289322s of 21.947088242s, submitted: 36
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117882880 unmapped: 1638400 heap: 119521280 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8827000/0x0/0x4ffc00000, data 0x2d7c367/0x2e3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [2])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 1417216 heap: 119521280 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 4194304 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 4194304 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438369 data_alloc: 234881024 data_used: 24674304
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 4169728 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 4153344 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8804000/0x0/0x4ffc00000, data 0x2da7367/0x2e68000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 4145152 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8804000/0x0/0x4ffc00000, data 0x2da7367/0x2e68000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 4112384 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 4112384 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436253 data_alloc: 234881024 data_used: 24670208
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 4112384 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8801000/0x0/0x4ffc00000, data 0x2daa367/0x2e6b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 4063232 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 4063232 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 4063232 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8801000/0x0/0x4ffc00000, data 0x2daa367/0x2e6b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 4063232 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437469 data_alloc: 234881024 data_used: 24748032
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.877882004s of 14.711136818s, submitted: 104
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 4063232 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 4063232 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 4063232 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8800000/0x0/0x4ffc00000, data 0x2dab367/0x2e6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 4063232 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 4063232 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437057 data_alloc: 234881024 data_used: 24748032
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 4055040 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 4055040 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8800000/0x0/0x4ffc00000, data 0x2dab367/0x2e6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 4055040 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8800000/0x0/0x4ffc00000, data 0x2dab367/0x2e6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 4055040 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 4055040 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437057 data_alloc: 234881024 data_used: 24748032
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 4055040 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 4055040 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8800000/0x0/0x4ffc00000, data 0x2dab367/0x2e6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 4046848 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 4038656 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 4038656 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75334800 session 0x557a73d3e1e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437057 data_alloc: 234881024 data_used: 24748032
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 2727936 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74200c00 session 0x557a754c43c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 2727936 heap: 120569856 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.475677490s of 16.483873367s, submitted: 2
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a733b7800 session 0x557a7338ba40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 8126464 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8252000/0x0/0x4ffc00000, data 0x3359367/0x341a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75b6bc00 session 0x557a75f414a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 8126464 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8252000/0x0/0x4ffc00000, data 0x3359367/0x341a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1,1])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 8093696 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480875 data_alloc: 234881024 data_used: 25272320
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74205400 session 0x557a72b49c20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 8085504 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c46c00 session 0x557a765b61e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 8085504 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8252000/0x0/0x4ffc00000, data 0x3359367/0x341a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75f2b400 session 0x557a75da72c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118276096 unmapped: 7684096 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118276096 unmapped: 7684096 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118276096 unmapped: 7684096 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1503445 data_alloc: 251658240 data_used: 27676672
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 4431872 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f822d000/0x0/0x4ffc00000, data 0x337d377/0x343f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 4399104 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 4399104 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 4399104 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f822d000/0x0/0x4ffc00000, data 0x337d377/0x343f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 4358144 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1524877 data_alloc: 251658240 data_used: 28270592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 4358144 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 4358144 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 4440064 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 4431872 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 4431872 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1524877 data_alloc: 251658240 data_used: 28270592
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f822d000/0x0/0x4ffc00000, data 0x337d377/0x343f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 4431872 heap: 125960192 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.504955292s of 19.566396713s, submitted: 11
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 4046848 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 4046848 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 3915776 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f7746000/0x0/0x4ffc00000, data 0x3e5c377/0x3f1e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 123764736 unmapped: 4612096 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1615201 data_alloc: 251658240 data_used: 28573696
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 123764736 unmapped: 4612096 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 123764736 unmapped: 4612096 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 123764736 unmapped: 4612096 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f772b000/0x0/0x4ffc00000, data 0x3e7f377/0x3f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 123764736 unmapped: 4612096 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 4497408 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1614529 data_alloc: 251658240 data_used: 28573696
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 4497408 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 4497408 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f7728000/0x0/0x4ffc00000, data 0x3e82377/0x3f44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 4489216 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a756b4400 session 0x557a754a74a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c39c00 session 0x557a73d0f0e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.051638603s of 12.254832268s, submitted: 75
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 8708096 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75b6ac00 session 0x557a7662e3c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 8699904 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447570 data_alloc: 234881024 data_used: 22265856
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 8699904 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8800000/0x0/0x4ffc00000, data 0x2dab367/0x2e6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 8699904 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 8699904 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8800000/0x0/0x4ffc00000, data 0x2dab367/0x2e6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 8699904 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 8699904 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447570 data_alloc: 234881024 data_used: 22265856
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8800000/0x0/0x4ffc00000, data 0x2dab367/0x2e6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 8699904 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a7604fc00 session 0x557a75f41e00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8800000/0x0/0x4ffc00000, data 0x2dab367/0x2e6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 8699904 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75395c00 session 0x557a7689a3c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149520 data_alloc: 218103808 data_used: 7454720
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149520 data_alloc: 218103808 data_used: 7454720
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149520 data_alloc: 218103808 data_used: 7454720
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149520 data_alloc: 218103808 data_used: 7454720
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149520 data_alloc: 218103808 data_used: 7454720
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149520 data_alloc: 218103808 data_used: 7454720
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149520 data_alloc: 218103808 data_used: 7454720
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149520 data_alloc: 218103808 data_used: 7454720
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149520 data_alloc: 218103808 data_used: 7454720
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149520 data_alloc: 218103808 data_used: 7454720
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741d8400 session 0x557a73d53680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75334800 session 0x557a763814a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74272000 session 0x557a765b7c20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 20815872 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c49c00 session 0x557a75da6f00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75395800 session 0x557a750a9a40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 20979712 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a7604ec00 session 0x557a752f7c20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 20963328 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 59.271514893s of 59.536514282s, submitted: 26
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a72fb7800 session 0x557a75f36b40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 20758528 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9eac000/0x0/0x4ffc00000, data 0x1700357/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9eac000/0x0/0x4ffc00000, data 0x1700357/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 20758528 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165842 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 20758528 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 20758528 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 20758528 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 20758528 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9eac000/0x0/0x4ffc00000, data 0x1700357/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 20758528 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165842 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 20758528 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 20750336 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 20750336 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9eac000/0x0/0x4ffc00000, data 0x1700357/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 20750336 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 20750336 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9eac000/0x0/0x4ffc00000, data 0x1700357/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169642 data_alloc: 218103808 data_used: 7458816
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 20750336 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 20750336 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 20750336 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9eac000/0x0/0x4ffc00000, data 0x1700357/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 20750336 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 20742144 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169642 data_alloc: 218103808 data_used: 7458816
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 20742144 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9eac000/0x0/0x4ffc00000, data 0x1700357/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 20742144 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9eac000/0x0/0x4ffc00000, data 0x1700357/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 20742144 heap: 128376832 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.789546967s of 20.143539429s, submitted: 7
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 24584192 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f930f000/0x0/0x4ffc00000, data 0x229d357/0x235d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253800 data_alloc: 218103808 data_used: 7495680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f930a000/0x0/0x4ffc00000, data 0x22a1357/0x2361000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253800 data_alloc: 218103808 data_used: 7495680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253800 data_alloc: 218103808 data_used: 7495680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f930a000/0x0/0x4ffc00000, data 0x22a1357/0x2361000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253800 data_alloc: 218103808 data_used: 7495680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f930a000/0x0/0x4ffc00000, data 0x22a1357/0x2361000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f930a000/0x0/0x4ffc00000, data 0x22a1357/0x2361000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f930a000/0x0/0x4ffc00000, data 0x22a1357/0x2361000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253800 data_alloc: 218103808 data_used: 7495680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f930a000/0x0/0x4ffc00000, data 0x22a1357/0x2361000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253800 data_alloc: 218103808 data_used: 7495680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f930a000/0x0/0x4ffc00000, data 0x22a1357/0x2361000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74273800 session 0x557a750a83c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.960403442s of 29.478780746s, submitted: 52
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74200000 session 0x557a740ce780
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a7604f000 session 0x557a7662e1e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155488 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155488 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.032462120s of 13.049506187s, submitted: 8
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155620 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 25616384 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 25600000 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 25600000 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155620 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 25600000 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 25600000 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 25600000 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 25600000 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 25600000 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155620 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 25600000 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 25600000 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 25600000 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 25600000 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 25600000 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155620 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.553961754s of 15.564026833s, submitted: 3
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75b68400 session 0x557a762bd0e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 26025984 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 26025984 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 26025984 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x166d357/0x172d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 26025984 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 26025984 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167904 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 26025984 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x166d357/0x172d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75335800 session 0x557a75eaba40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 26017792 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 9295 writes, 35K keys, 9295 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9295 writes, 2129 syncs, 4.37 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1166 writes, 3974 keys, 1166 commit groups, 1.0 writes per commit group, ingest: 3.63 MB, 0.01 MB/s#012Interval WAL: 1166 writes, 474 syncs, 2.46 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106840064 unmapped: 26001408 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106840064 unmapped: 26001408 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106840064 unmapped: 26001408 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x166d357/0x172d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169197 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 25993216 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 25993216 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 25993216 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 25993216 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x166d357/0x172d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 25993216 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176037 data_alloc: 218103808 data_used: 7979008
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x166d357/0x172d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 25993216 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x166d357/0x172d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 25993216 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 25993216 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176037 data_alloc: 218103808 data_used: 7979008
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.846391678s of 20.941465378s, submitted: 11
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 25993216 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x166d357/0x172d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9cc7000/0x0/0x4ffc00000, data 0x18e5357/0x19a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 25993216 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195747 data_alloc: 218103808 data_used: 8069120
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9cc3000/0x0/0x4ffc00000, data 0x18e9357/0x19a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195747 data_alloc: 218103808 data_used: 8069120
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9cc3000/0x0/0x4ffc00000, data 0x18e9357/0x19a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195747 data_alloc: 218103808 data_used: 8069120
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9cc3000/0x0/0x4ffc00000, data 0x18e9357/0x19a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9cc3000/0x0/0x4ffc00000, data 0x18e9357/0x19a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 25985024 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195747 data_alloc: 218103808 data_used: 8069120
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 25976832 heap: 132841472 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.590280533s of 19.652723312s, submitted: 15
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741d9400 session 0x557a7291d680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 29884416 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9825000/0x0/0x4ffc00000, data 0x1d87357/0x1e47000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 29884416 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 29884416 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c1ec00 session 0x557a752f7a40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 29884416 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a772bfc00 session 0x557a752f6780
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230845 data_alloc: 218103808 data_used: 8069120
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741d9400 session 0x557a752f61e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c1ec00 session 0x557a752f72c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107143168 unmapped: 29900800 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 29908992 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9825000/0x0/0x4ffc00000, data 0x1d87357/0x1e47000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 29908992 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 27975680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 27975680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9825000/0x0/0x4ffc00000, data 0x1d87357/0x1e47000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264698 data_alloc: 234881024 data_used: 12820480
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 27975680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 27975680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 27975680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 27975680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9825000/0x0/0x4ffc00000, data 0x1d87357/0x1e47000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 27975680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264698 data_alloc: 234881024 data_used: 12820480
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 27975680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 27975680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 27967488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.171054840s of 17.234899521s, submitted: 9
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 24346624 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 24346624 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9348000/0x0/0x4ffc00000, data 0x225c357/0x231c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303570 data_alloc: 234881024 data_used: 12812288
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 23003136 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 23035904 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 23035904 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9323000/0x0/0x4ffc00000, data 0x2281357/0x2341000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 23035904 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9323000/0x0/0x4ffc00000, data 0x2281357/0x2341000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9323000/0x0/0x4ffc00000, data 0x2281357/0x2341000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 23035904 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308136 data_alloc: 234881024 data_used: 12812288
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 23707648 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9328000/0x0/0x4ffc00000, data 0x2284357/0x2344000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 23707648 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9328000/0x0/0x4ffc00000, data 0x2284357/0x2344000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741fcc00 session 0x557a75eab680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75335800 session 0x557a7291c000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 23707648 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 23707648 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.469938278s of 10.638383865s, submitted: 48
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741fe400 session 0x557a73d3ed20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9cc3000/0x0/0x4ffc00000, data 0x18e9357/0x19a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 26722304 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9cc3000/0x0/0x4ffc00000, data 0x18e9357/0x19a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202188 data_alloc: 218103808 data_used: 8069120
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 26722304 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110051328 unmapped: 26992640 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110051328 unmapped: 26992640 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9cc3000/0x0/0x4ffc00000, data 0x18e9357/0x19a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c1f400 session 0x557a75c63e00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741ff000 session 0x557a73d103c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75b67c00 session 0x557a76380000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166010 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166010 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166010 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26212 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166010 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166010 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 27910144 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75bad000 session 0x557a72fad860
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c39c00 session 0x557a7662e5a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741ff000 session 0x557a75a081e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c1f400 session 0x557a75a094a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.266065598s of 28.455215454s, submitted: 42
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75b67c00 session 0x557a75a09e00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75bad000 session 0x557a75a08000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a72fba400 session 0x557a75f40f00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741ff000 session 0x557a752863c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c1f400 session 0x557a75f40000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 28401664 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f99da000/0x0/0x4ffc00000, data 0x1bd2357/0x1c92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 28401664 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f99da000/0x0/0x4ffc00000, data 0x1bd2357/0x1c92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 28401664 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218561 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 28401664 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 28401664 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741fb800 session 0x557a752f63c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f99da000/0x0/0x4ffc00000, data 0x1bd2357/0x1c92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 28401664 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75b64400 session 0x557a752f6000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75394800 session 0x557a752801e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741fb800 session 0x557a75280780
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 28377088 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f99da000/0x0/0x4ffc00000, data 0x1bd2357/0x1c92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 28377088 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218561 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108675072 unmapped: 28368896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 26411008 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 26411008 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 26411008 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f99da000/0x0/0x4ffc00000, data 0x1bd2357/0x1c92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 26411008 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263381 data_alloc: 234881024 data_used: 13549568
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 26411008 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f99da000/0x0/0x4ffc00000, data 0x1bd2357/0x1c92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 26411008 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 26411008 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 26411008 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f99da000/0x0/0x4ffc00000, data 0x1bd2357/0x1c92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 26411008 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263381 data_alloc: 234881024 data_used: 13549568
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 26411008 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 26411008 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.172365189s of 20.263540268s, submitted: 23
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 20889600 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361445 data_alloc: 234881024 data_used: 14790656
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8da1000/0x0/0x4ffc00000, data 0x280b357/0x28cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8da1000/0x0/0x4ffc00000, data 0x280b357/0x28cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359829 data_alloc: 234881024 data_used: 14790656
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8d9e000/0x0/0x4ffc00000, data 0x280e357/0x28ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8d9e000/0x0/0x4ffc00000, data 0x280e357/0x28ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8d9e000/0x0/0x4ffc00000, data 0x280e357/0x28ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359829 data_alloc: 234881024 data_used: 14790656
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8d9e000/0x0/0x4ffc00000, data 0x280e357/0x28ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8d9e000/0x0/0x4ffc00000, data 0x280e357/0x28ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 22183936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a72fb6000 session 0x557a73d0e3c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a72fb6400 session 0x557a74268b40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359981 data_alloc: 234881024 data_used: 14794752
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 22175744 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a72fb4000 session 0x557a73d3eb40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.381097794s of 19.617059708s, submitted: 100
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 22175744 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 22175744 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8d9c000/0x0/0x4ffc00000, data 0x2810357/0x28d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 22175744 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 22175744 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8d9c000/0x0/0x4ffc00000, data 0x2810357/0x28d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360213 data_alloc: 234881024 data_used: 14794752
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 22175744 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a756b4000 session 0x557a741f2f00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a72fba800 session 0x557a763801e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75396800 session 0x557a754a61e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a77ba2c00 session 0x557a74298960
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a7604e400 session 0x557a75766b40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 19988480 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a72fba800 session 0x557a75f41680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 22085632 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 22085632 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8908000/0x0/0x4ffc00000, data 0x2ca4357/0x2d64000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [0,0,0,0,0,3,2])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 22061056 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395263 data_alloc: 234881024 data_used: 14794752
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 23273472 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741fec00 session 0x557a75da6f00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 23248896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a767e1400 session 0x557a750a8960
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 23248896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75b68c00 session 0x557a768934a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.521374702s of 11.660951614s, submitted: 219
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75334400 session 0x557a76892f00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8907000/0x0/0x4ffc00000, data 0x2ca4366/0x2d65000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 23240704 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 23240704 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405384 data_alloc: 234881024 data_used: 15605760
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x2ca5366/0x2d66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115826688 unmapped: 21217280 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115826688 unmapped: 21217280 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x2ca5366/0x2d66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115826688 unmapped: 21217280 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x2ca5366/0x2d66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115826688 unmapped: 21217280 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115826688 unmapped: 21217280 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424536 data_alloc: 234881024 data_used: 18468864
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115826688 unmapped: 21217280 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x2ca5366/0x2d66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115826688 unmapped: 21217280 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115826688 unmapped: 21217280 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115826688 unmapped: 21217280 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115826688 unmapped: 21217280 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424712 data_alloc: 234881024 data_used: 18468864
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8904000/0x0/0x4ffc00000, data 0x2ca6366/0x2d67000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 21200896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.535853386s of 12.574276924s, submitted: 14
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 20004864 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8639000/0x0/0x4ffc00000, data 0x2f72366/0x3033000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 19857408 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 19857408 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 19857408 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1443762 data_alloc: 234881024 data_used: 18501632
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 19857408 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 19857408 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8634000/0x0/0x4ffc00000, data 0x2f76366/0x3037000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 19849216 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 19849216 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 19849216 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1443762 data_alloc: 234881024 data_used: 18501632
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 19849216 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 19841024 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 19841024 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8633000/0x0/0x4ffc00000, data 0x2f77366/0x3038000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 19841024 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 19841024 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1443938 data_alloc: 234881024 data_used: 18501632
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 19841024 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75334400 session 0x557a7689be00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.107069969s of 15.211624146s, submitted: 19
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a72fba800 session 0x557a73d3e960
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 21209088 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a772c1c00 session 0x557a75a085a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8d99000/0x0/0x4ffc00000, data 0x2813357/0x28d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 21209088 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 21209088 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 21209088 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364830 data_alloc: 234881024 data_used: 14794752
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 21209088 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 21209088 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 21209088 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f8d99000/0x0/0x4ffc00000, data 0x2813357/0x28d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 21209088 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741ff000 session 0x557a72fadc20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 21209088 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364830 data_alloc: 234881024 data_used: 14794752
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.644480705s of 10.140521049s, submitted: 47
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75b64000 session 0x557a768925a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180756 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855054201' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180756 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180756 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180756 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180756 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180756 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 27779072 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.462991714s of 30.466070175s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c98400 session 0x557a750a94a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a756b4000 session 0x557a752f7c20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74273c00 session 0x557a75f401e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75394400 session 0x557a75f40000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75b6b400 session 0x557a75f403c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 28499968 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 28499968 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9e31000/0x0/0x4ffc00000, data 0x177a3b9/0x183b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 28491776 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 28491776 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199034 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 28491776 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 28491776 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9e31000/0x0/0x4ffc00000, data 0x177a3b9/0x183b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 28491776 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 29188096 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c4ac00 session 0x557a752863c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 29188096 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201968 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9e30000/0x0/0x4ffc00000, data 0x177a3dc/0x183c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 29188096 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9e30000/0x0/0x4ffc00000, data 0x177a3dc/0x183c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29171712 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29171712 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29171712 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.107866287s of 12.832972527s, submitted: 42
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c1e800 session 0x557a73d554a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c46c00 session 0x557a7662fc20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75b67000 session 0x557a75da7a40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185984 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185984 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185984 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.143586159s of 12.298224449s, submitted: 51
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 29712384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a7604ec00 session 0x557a75f40b40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c1e800 session 0x557a72b48d20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c46c00 session 0x557a752801e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a74c4ac00 session 0x557a752f6000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75b67000 session 0x557a752863c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107978752 unmapped: 29065216 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107978752 unmapped: 29065216 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: mgrc ms_handle_reset ms_handle_reset con 0x557a74c39800
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4099200288
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4099200288,v1:192.168.122.100:6801/4099200288]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: mgrc handle_mgr_configure stats_period=5
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 28999680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9ae3000/0x0/0x4ffc00000, data 0x1ac9357/0x1b89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227909 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 28999680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 28999680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9ae3000/0x0/0x4ffc00000, data 0x1ac9357/0x1b89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 28999680 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 29032448 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f9ae3000/0x0/0x4ffc00000, data 0x1ac9357/0x1b89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108355584 unmapped: 28688384 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265585 data_alloc: 234881024 data_used: 12476416
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a741d8800 session 0x557a7338a000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109404160 unmapped: 27639808 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.196928024s of 10.001055717s, submitted: 21
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 27746304 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 29548544 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 ms_handle_reset con 0x557a75b6b000 session 0x557a74268f00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 29777920 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 29745152 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 29745152 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 29745152 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: mgrc handle_mgr_map Got map version 31
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4099200288,v1:192.168.122.100:6801/4099200288]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 29630464 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 29630464 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 29630464 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 29630464 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 29630464 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 29614080 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 29614080 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 29614080 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 29614080 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 29614080 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 29614080 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 29614080 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 29614080 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 29614080 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 29614080 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:42 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:45:42 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:42.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 29622272 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189273 data_alloc: 218103808 data_used: 6930432
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 79.088951111s of 79.517173767s, submitted: 17
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fa042000/0x0/0x4ffc00000, data 0x156a357/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 29769728 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192959 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192959 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a74c99c00 session 0x557a7527b4a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 29761536 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192959 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 29753344 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: mgrc handle_mgr_map Got map version 32
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/4099200288,v1:192.168.122.100:6801/4099200288]
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 29933568 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 29933568 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 29933568 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 29933568 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192959 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 29933568 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 29933568 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 29933568 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 29933568 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 29925376 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192959 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 29925376 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 29925376 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 29925376 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 29925376 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 29925376 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192959 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 29925376 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 29925376 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 29925376 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 29925376 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 29917184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192959 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 29917184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 29917184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 29917184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 29917184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 29917184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192959 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 29917184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 29917184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 29917184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 29917184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 29917184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192959 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 29917184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 29908992 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 29908992 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 29908992 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 49.464805603s of 49.482944489s, submitted: 6
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107151360 unmapped: 29892608 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75b6bc00 session 0x557a76893a40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212984 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a74c1f400 session 0x557a768665a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a74c1f400 session 0x557a75a081e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a741d8800 session 0x557a74268960
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a74c99c00 session 0x557a75281860
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107683840 unmapped: 29360128 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107683840 unmapped: 29360128 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107683840 unmapped: 29360128 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107683840 unmapped: 29360128 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9e67000/0x0/0x4ffc00000, data 0x174439d/0x1805000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107683840 unmapped: 29360128 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212984 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107683840 unmapped: 29360128 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 29351936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 29351936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9e67000/0x0/0x4ffc00000, data 0x174439d/0x1805000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75334c00 session 0x557a767dba40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 29351936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 29351936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212984 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a72fbb400 session 0x557a752f7c20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 29351936 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9e67000/0x0/0x4ffc00000, data 0x174439d/0x1805000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a72fbb400 session 0x557a76866b40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a741d8800 session 0x557a754a7860
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107708416 unmapped: 29335552 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107708416 unmapped: 29335552 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107708416 unmapped: 29335552 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.066913605s of 14.131144524s, submitted: 17
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 29327360 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213116 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 29327360 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9e67000/0x0/0x4ffc00000, data 0x174439d/0x1805000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 29327360 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 29327360 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 29327360 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 29327360 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223908 data_alloc: 218103808 data_used: 8564736
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 29327360 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9e67000/0x0/0x4ffc00000, data 0x174439d/0x1805000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 29327360 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 29327360 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9e67000/0x0/0x4ffc00000, data 0x174439d/0x1805000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 29327360 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 29327360 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223908 data_alloc: 218103808 data_used: 8564736
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.953133583s of 11.957569122s, submitted: 1
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 28844032 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 22798336 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9659000/0x0/0x4ffc00000, data 0x1f4c39d/0x200d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 23404544 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 23404544 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f95ac000/0x0/0x4ffc00000, data 0x1ff139d/0x20b2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 23248896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307832 data_alloc: 234881024 data_used: 9420800
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 23248896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 23248896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 23248896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 24272896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9592000/0x0/0x4ffc00000, data 0x201939d/0x20da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 24272896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302208 data_alloc: 234881024 data_used: 9453568
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 24272896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 24272896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 24272896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 24272896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9592000/0x0/0x4ffc00000, data 0x201939d/0x20da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 24272896 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302208 data_alloc: 234881024 data_used: 9453568
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.142307281s of 14.435538292s, submitted: 99
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a74c1f400 session 0x557a7689a000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 24248320 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75335000 session 0x557a752892c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199928 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199928 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199928 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199928 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199928 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 24895488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.687702179s of 27.890380859s, submitted: 24
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2f000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 24805376 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a74200800 session 0x557a757654a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a74200800 session 0x557a75a09e00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a72fbb400 session 0x557a74268b40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112214016 unmapped: 24829952 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224813 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a741d8800 session 0x557a76818b40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a74c1f400 session 0x557a75eab0e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112214016 unmapped: 24829952 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112214016 unmapped: 24829952 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112214016 unmapped: 24829952 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9a16000/0x0/0x4ffc00000, data 0x178539d/0x1846000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112214016 unmapped: 24829952 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75335000 session 0x557a763814a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9a16000/0x0/0x4ffc00000, data 0x178539d/0x1846000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75396c00 session 0x557a73d10d20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112230400 unmapped: 24813568 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224589 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112230400 unmapped: 24813568 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75b6ac00 session 0x557a750a8b40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 24797184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75334800 session 0x557a75a09c20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 24797184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.479331017s of 10.105368614s, submitted: 30
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112263168 unmapped: 24780800 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112263168 unmapped: 24780800 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229049 data_alloc: 218103808 data_used: 6963200
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9a14000/0x0/0x4ffc00000, data 0x17853d0/0x1848000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 24797184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 24797184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 24797184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 24797184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 24797184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236193 data_alloc: 218103808 data_used: 8019968
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9a14000/0x0/0x4ffc00000, data 0x17853d0/0x1848000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 24797184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 24797184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 24797184 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 24969216 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 24969216 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236193 data_alloc: 218103808 data_used: 8019968
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.585978508s of 12.800091743s, submitted: 2
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9a14000/0x0/0x4ffc00000, data 0x17853d0/0x1848000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,1,0,1,1])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 24109056 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 22847488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75b67800 session 0x557a74269860
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a741d9400 session 0x557a72b48f00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75334800 session 0x557a73d53860
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75396c00 session 0x557a7338ba40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75b67800 session 0x557a74299680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 22896640 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 22880256 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 22872064 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1290398 data_alloc: 218103808 data_used: 8261632
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a74205400 session 0x557a742992c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a756b4000 session 0x557a768932c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 22863872 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f94e3000/0x0/0x4ffc00000, data 0x1cb53d0/0x1d78000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a74205400 session 0x557a75da70e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 22863872 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75334800 session 0x557a75f41e00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 22863872 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 22863872 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 22372352 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309512 data_alloc: 234881024 data_used: 11042816
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f94e3000/0x0/0x4ffc00000, data 0x1cb53e0/0x1d79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 22372352 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f94e3000/0x0/0x4ffc00000, data 0x1cb53e0/0x1d79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 22372352 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f94e3000/0x0/0x4ffc00000, data 0x1cb53e0/0x1d79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 22339584 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 22339584 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f94e3000/0x0/0x4ffc00000, data 0x1cb53e0/0x1d79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 22306816 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309512 data_alloc: 234881024 data_used: 11042816
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 22306816 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 22306816 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 22306816 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 22306816 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f94e3000/0x0/0x4ffc00000, data 0x1cb53e0/0x1d79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 22306816 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309968 data_alloc: 234881024 data_used: 11055104
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.263778687s of 19.926912308s, submitted: 51
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115793920 unmapped: 21250048 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 21233664 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20799488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20799488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f918c000/0x0/0x4ffc00000, data 0x200c3e0/0x20d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20799488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339012 data_alloc: 234881024 data_used: 11218944
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20799488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20799488 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20668416 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f916b000/0x0/0x4ffc00000, data 0x202d3e0/0x20f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20668416 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20668416 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337700 data_alloc: 234881024 data_used: 11218944
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20668416 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20668416 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20668416 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.488512039s of 12.818861961s, submitted: 38
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f916b000/0x0/0x4ffc00000, data 0x202d3e0/0x20f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20668416 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20750336 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337956 data_alloc: 234881024 data_used: 11218944
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75396c00 session 0x557a768674a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75b67800 session 0x557a75a68000
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20750336 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a74242800 session 0x557a740cd4a0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 22880256 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f97be000/0x0/0x4ffc00000, data 0x19da3e0/0x1a9e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 22880256 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f97bf000/0x0/0x4ffc00000, data 0x19da3d0/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 22880256 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 22880256 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267855 data_alloc: 218103808 data_used: 8261632
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a77ba2800 session 0x557a76893e00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a772c1400 session 0x557a73d52960
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23314432 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75394000 session 0x557a7662fe00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23314432 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23314432 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23314432 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23314432 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216254 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23314432 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23314432 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23314432 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23314432 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216254 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216254 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216254 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216254 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75b66800 session 0x557a768921e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a741fa400 session 0x557a73db4780
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75394000 session 0x557a75e59a40
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75b66800 session 0x557a7662d2c0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 113655808 unmapped: 23388160 heap: 137043968 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.667488098s of 34.980655670s, submitted: 75
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a772c1400 session 0x557a75280780
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a77ba2800 session 0x557a7662ef00
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75f2bc00 session 0x557a75a09680
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75f2bc00 session 0x557a75289c20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75394000 session 0x557a7662f0e0
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 29294592 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 29294592 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299951 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 29294592 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x201e40f/0x20e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 29294592 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 29294592 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 29294592 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 29294592 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299951 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 29294592 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x201e40f/0x20e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 26230784 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 21929984 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x201e40f/0x20e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 21929984 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 21929984 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372911 data_alloc: 234881024 data_used: 17809408
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 21929984 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 21929984 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 21929984 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 21929984 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x201e40f/0x20e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 21897216 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372911 data_alloc: 234881024 data_used: 17809408
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121446400 unmapped: 21897216 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x201e40f/0x20e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 21864448 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.660846710s of 19.786142349s, submitted: 33
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 124854272 unmapped: 18489344 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x201e40f/0x20e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125943808 unmapped: 17399808 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 17604608 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420491 data_alloc: 234881024 data_used: 18358272
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 17604608 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 17604608 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 17604608 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f8c1d000/0x0/0x4ffc00000, data 0x257440f/0x2637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 17539072 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 17539072 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420491 data_alloc: 234881024 data_used: 18358272
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125927424 unmapped: 17416192 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125927424 unmapped: 17416192 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f8c06000/0x0/0x4ffc00000, data 0x259340f/0x2656000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125927424 unmapped: 17416192 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125927424 unmapped: 17416192 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125927424 unmapped: 17416192 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416131 data_alloc: 234881024 data_used: 18362368
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f8c06000/0x0/0x4ffc00000, data 0x259340f/0x2656000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.514573097s of 12.755817413s, submitted: 88
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a75f2b800 session 0x557a75f8ed20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125927424 unmapped: 17416192 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 125927424 unmapped: 17416192 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 25346048 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 ms_handle_reset con 0x557a74201c00 session 0x557a73d10d20
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226349 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226349 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226349 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226349 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226349 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226349 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                              ** DB Stats **
                                              Uptime(secs): 2400.0 total, 600.0 interval
                                              Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                              Cumulative WAL: 11K writes, 3034 syncs, 3.74 writes per sync, written: 0.03 GB, 0.01 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 2058 writes, 6494 keys, 2058 commit groups, 1.0 writes per commit group, ingest: 6.25 MB, 0.01 MB/s
                                              Interval WAL: 2058 writes, 905 syncs, 2.27 writes per sync, written: 0.01 GB, 0.01 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226349 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: osd.0 147 heartbeat osd_stat(store_statfs(0x4f9c2e000/0x0/0x4ffc00000, data 0x156c39d/0x162d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25337856 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226349 data_alloc: 218103808 data_used: 6950912
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: do_command 'config diff' '{prefix=config diff}'
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: do_command 'config show' '{prefix=config show}'
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: do_command 'counter dump' '{prefix=counter dump}'
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: do_command 'counter schema' '{prefix=counter schema}'
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 25657344 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 25722880 heap: 143343616 old mem: 2845415832 new mem: 2845415832
Jan 21 11:45:42 np0005590810 ceph-osd[82794]: do_command 'log dump' '{prefix=log dump}'
Jan 21 11:45:43 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:43 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:43 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:43 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:43.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:43 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16455 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:43 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.35999 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:43 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26227 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 11:45:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3846783337' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 11:45:43 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16467 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:43 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26242 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:43 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 21 11:45:43 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3937542134' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 21 11:45:44 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16485 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:44 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26254 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:44 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 21 11:45:44 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/691186895' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 21 11:45:44 np0005590810 nova_compute[251104]: 2026-01-21 16:45:44.400 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 11:45:44 np0005590810 nova_compute[251104]: 2026-01-21 16:45:44.401 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 11:45:44 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16497 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:44 np0005590810 podman[277253]: 2026-01-21 16:45:44.463260189 +0000 UTC m=+0.117915424 container health_status 9d1a12c74dee6fcfa33d0fc8f53635b7a61f17a05a8ed717cf698246a875714b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '561c1eaae6d9bf97079e9aebc0d5cdd85435eb8878d6e4816bf05d6668d6b2d5-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b-b728735954aa1ce0b03a4f676f38fb04f6bbf64e189ae21f8a75aca8c0e4494b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 21 11:45:44 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26269 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:44 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16512 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:44 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26281 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:44 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:44 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:44 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:44.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:45 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:45 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:45 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:45 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:45.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:45 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16530 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 21 11:45:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999278991' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 21 11:45:45 np0005590810 nova_compute[251104]: 2026-01-21 16:45:45.456 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:45:45 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-mgr-compute-0-ygffhs[74667]: ::ffff:192.168.122.100 - - [21/Jan/2026:16:45:45] "GET /metrics HTTP/1.1" 200 48658 "" "Prometheus/2.51.0"
Jan 21 11:45:45 np0005590810 ceph-mgr[74671]: [prometheus INFO cherrypy.access.140227905715840] ::ffff:192.168.122.100 - - [21/Jan/2026:16:45:45] "GET /metrics HTTP/1.1" 200 48658 "" "Prometheus/2.51.0"
Jan 21 11:45:45 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16539 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:45 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 21 11:45:45 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4059747792' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 21 11:45:46 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.36086 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 21 11:45:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1719236203' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 21 11:45:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 21 11:45:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039987462' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 21 11:45:46 np0005590810 nova_compute[251104]: 2026-01-21 16:45:46.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 11:45:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:45:46 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.36101 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 21 11:45:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1241312907' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 21 11:45:46 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 21 11:45:46 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/433286758' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 21 11:45:46 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:46 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:46 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:46.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:47 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.36119 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:47 np0005590810 nova_compute[251104]: 2026-01-21 16:45:47.069 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 21 11:45:47 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:45:47 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:47 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:47 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:47.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 21 11:45:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3540815742' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 21 11:45:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:47.234Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 21 11:45:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:47.235Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:45:47 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:47.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:45:47 np0005590810 nova_compute[251104]: 2026-01-21 16:45:47.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:45:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 21 11:45:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467737236' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 21 11:45:47 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.36131 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 21 11:45:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/379051370' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 21 11:45:47 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.36143 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:47 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 21 11:45:47 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2853589640' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 21 11:45:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 21 11:45:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1381113094' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 21 11:45:48 np0005590810 systemd[1]: Starting Hostname Service...
Jan 21 11:45:48 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.36155 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:48 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26407 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:48 np0005590810 systemd[1]: Started Hostname Service.
Jan 21 11:45:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 21 11:45:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2874315010' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 21 11:45:48 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 21 11:45:48 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079191186' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 21 11:45:48 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.36167 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:48 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26419 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:48 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26425 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:48.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 21 11:45:48 np0005590810 ceph-d9745984-fea8-5195-8ec5-61f685b5c785-alertmanager-compute-0[104775]: ts=2026-01-21T16:45:48.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 21 11:45:48 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:48 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:48 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:48.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:49 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.36182 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 21 11:45:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530451049' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 21 11:45:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 21 11:45:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893330346' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 21 11:45:49 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26434 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 21 11:45:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 21 11:45:49 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:49 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:49 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:49 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:49.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 21 11:45:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 21 11:45:49 np0005590810 nova_compute[251104]: 2026-01-21 16:45:49.368 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:45:49 np0005590810 nova_compute[251104]: 2026-01-21 16:45:49.368 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 11:45:49 np0005590810 nova_compute[251104]: 2026-01-21 16:45:49.368 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 11:45:49 np0005590810 nova_compute[251104]: 2026-01-21 16:45:49.401 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 11:45:49 np0005590810 nova_compute[251104]: 2026-01-21 16:45:49.402 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:45:49 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.36194 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:49 np0005590810 nova_compute[251104]: 2026-01-21 16:45:49.436 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:45:49 np0005590810 nova_compute[251104]: 2026-01-21 16:45:49.437 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:45:49 np0005590810 nova_compute[251104]: 2026-01-21 16:45:49.437 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:45:49 np0005590810 nova_compute[251104]: 2026-01-21 16:45:49.437 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 11:45:49 np0005590810 nova_compute[251104]: 2026-01-21 16:45:49.437 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:45:49 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26446 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:49 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16650 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 21 11:45:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4233251464' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 21 11:45:49 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26470 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:49 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:45:49 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244303777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:45:49 np0005590810 nova_compute[251104]: 2026-01-21 16:45:49.981 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:45:50 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16677 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:50 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16683 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:50 np0005590810 nova_compute[251104]: 2026-01-21 16:45:50.183 251108 WARNING nova.virt.libvirt.driver [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 11:45:50 np0005590810 nova_compute[251104]: 2026-01-21 16:45:50.186 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4272MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 11:45:50 np0005590810 nova_compute[251104]: 2026-01-21 16:45:50.186 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 11:45:50 np0005590810 nova_compute[251104]: 2026-01-21 16:45:50.187 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 11:45:50 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Jan 21 11:45:50 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1311082988' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 21 11:45:50 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26485 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:50 np0005590810 nova_compute[251104]: 2026-01-21 16:45:50.459 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:50 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16698 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:50 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26512 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:50 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16707 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:50 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:50 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:50 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:50.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 21 11:45:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2538959863' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 21 11:45:51 np0005590810 nova_compute[251104]: 2026-01-21 16:45:51.109 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 11:45:51 np0005590810 nova_compute[251104]: 2026-01-21 16:45:51.109 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 11:45:51 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 21 11:45:51 np0005590810 nova_compute[251104]: 2026-01-21 16:45:51.155 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 11:45:51 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:51 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 11:45:51 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:51.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 11:45:51 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26524 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 21 11:45:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 21 11:45:51 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16716 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:51 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.36263 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Jan 21 11:45:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3188775539' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 21 11:45:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 21 11:45:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 21 11:45:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 11:45:51 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 11:45:51 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1151427380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 11:45:51 np0005590810 nova_compute[251104]: 2026-01-21 16:45:51.688 251108 DEBUG oslo_concurrency.processutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 11:45:51 np0005590810 nova_compute[251104]: 2026-01-21 16:45:51.696 251108 DEBUG nova.compute.provider_tree [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed in ProviderTree for provider: 2519faba-4002-49a2-b483-5098e748d2b5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 11:45:51 np0005590810 nova_compute[251104]: 2026-01-21 16:45:51.717 251108 DEBUG nova.scheduler.client.report [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Inventory has not changed for provider 2519faba-4002-49a2-b483-5098e748d2b5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 11:45:51 np0005590810 nova_compute[251104]: 2026-01-21 16:45:51.719 251108 DEBUG nova.compute.resource_tracker [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 11:45:51 np0005590810 nova_compute[251104]: 2026-01-21 16:45:51.719 251108 DEBUG oslo_concurrency.lockutils [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 11:45:51 np0005590810 nova_compute[251104]: 2026-01-21 16:45:51.719 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:45:51 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16749 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:52 np0005590810 nova_compute[251104]: 2026-01-21 16:45:52.072 251108 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 21 11:45:52 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16758 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:52 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 21 11:45:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3391849976' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 21 11:45:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 21 11:45:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 21 11:45:52 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16770 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 11:45:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 21 11:45:52 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 21 11:45:52 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26608 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:52 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:52 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:52 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:52.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:53 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:53 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:53 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:53 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:53.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:53 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.36347 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:53 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 21 11:45:53 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3219757017' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 21 11:45:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 21 11:45:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/60910912' entity='mgr.compute-0.ygffhs' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 21 11:45:54 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.16842 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 11:45:54 np0005590810 nova_compute[251104]: 2026-01-21 16:45:54.698 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:45:54 np0005590810 nova_compute[251104]: 2026-01-21 16:45:54.724 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:45:54 np0005590810 nova_compute[251104]: 2026-01-21 16:45:54.725 251108 DEBUG oslo_service.periodic_task [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 11:45:54 np0005590810 nova_compute[251104]: 2026-01-21 16:45:54.725 251108 DEBUG nova.compute.manager [None req-075feeb5-3df0-4839-a6c3-e9490c9a48f3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 11:45:54 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:54 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 11:45:54 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.100 - anonymous [21/Jan/2026:16:45:54.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 11:45:54 np0005590810 ceph-mon[74380]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 21 11:45:54 np0005590810 ceph-mon[74380]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473118258' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 21 11:45:55 np0005590810 ceph-mgr[74671]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 21 11:45:55 np0005590810 radosgw[94128]: ====== starting new request req=0x7f19172435d0 =====
Jan 21 11:45:55 np0005590810 radosgw[94128]: ====== req done req=0x7f19172435d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 11:45:55 np0005590810 radosgw[94128]: beast: 0x7f19172435d0: 192.168.122.102 - anonymous [21/Jan/2026:16:45:55.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 11:45:55 np0005590810 ceph-mgr[74671]: log_channel(audit) log [DBG] : from='client.26671 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
